End of training

- README.md +5 -5
- model.safetensors +2 -2
- training_args.bin +1 -1
README.md
CHANGED
@@ -33,11 +33,11 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 5e-05
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 16
 - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -51,5 +51,5 @@ The following hyperparameters were used during training:

 - Transformers 4.51.3
 - Pytorch 2.6.0+cu124
-- Datasets 3.5.
+- Datasets 3.5.1
 - Tokenizers 0.21.1
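The recorded values are internally consistent: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 8 × 2 = 16, which matches a single-device run. Below is a minimal sketch of how these hyperparameters would map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, the `"paged_adamw_32bit"` string for `OptimizerNames.PAGED_ADAMW` is an assumption, and the authoritative configuration lives in `training_args.bin`.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters recorded in the README diff above.
# output_dir is a placeholder; paged AdamW needs bitsandbytes installed.
args = TrainingArguments(
    output_dir="out",                   # placeholder, not taken from the diff
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch: 8 * 2 = 16
    seed=42,
    optim="paged_adamw_32bit",          # assumed string for OptimizerNames.PAGED_ADAMW
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
)
```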
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4f7677547fe98c2166d87308f1a6bae159b1841bf579b1ae1a904d7c913f0af0
+size 4650578693
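Because model.safetensors is tracked with Git LFS, the commit only updates the pointer file (oid and size); the weights themselves live in LFS storage. A small sketch, assuming the checkpoint has already been pulled locally, of checking a downloaded copy against the pointer recorded above:

```python
import hashlib
import os

# Values copied from the Git LFS pointer committed above.
EXPECTED_OID = "4f7677547fe98c2166d87308f1a6bae159b1841bf579b1ae1a904d7c913f0af0"
EXPECTED_SIZE = 4650578693

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so the ~4.7 GB checkpoint never sits fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "model.safetensors"  # assumed local path after `git lfs pull` or a hub download
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the LFS pointer"
assert sha256_of(path) == EXPECTED_OID, "sha256 does not match the LFS pointer"
```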
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cc08147fcba0244571702e8c4babdcb085c6a4dfa5b8ac634c199719d4ef7254
 size 5368
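training_args.bin is the serialized TrainingArguments object that the transformers Trainer writes with torch.save, so the full configuration (including fields not surfaced in the README) can be inspected from a local copy. A sketch, assuming transformers is installed and the file has been downloaded; note that unpickling with weights_only=False executes arbitrary code, so only do this for files you trust:

```python
import torch

# The file is a pickled transformers.TrainingArguments instance, so it needs
# transformers importable and weights_only=False (PyTorch >= 2.6 defaults to True).
training_args = torch.load("training_args.bin", weights_only=False)

print(training_args.learning_rate)                # 5e-05 per the README diff
print(training_args.per_device_train_batch_size)  # 8
print(training_args.gradient_accumulation_steps)  # 2
print(training_args.optim)                        # paged AdamW variant
```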