shubham-jj committed
Commit e9bd3f8 · verified · 1 Parent(s): 8619fc6

Model save

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -34,12 +34,12 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 4
- - eval_batch_size: 4
+ - train_batch_size: 1
+ - eval_batch_size: 1
  - seed: 42
- - gradient_accumulation_steps: 4
+ - gradient_accumulation_steps: 16
  - total_train_batch_size: 16
- - optimizer: Use paged_adamw_32bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 5
@@ -50,8 +50,8 @@ The following hyperparameters were used during training:
 
  ### Framework versions
 
- - PEFT 0.14.0
- - Transformers 4.48.3
+ - PEFT 0.15.2
+ - Transformers 4.51.3
  - Pytorch 2.6.0+cu124
- - Datasets 3.2.0
- - Tokenizers 0.21.0
+ - Datasets 3.5.1
+ - Tokenizers 0.21.1
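
For reference, a minimal sketch of how the updated hyperparameters would map onto `transformers.TrainingArguments`, assuming a single-device Trainer-based run. The training script is not part of this commit, and `output_dir` is illustrative:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the updated configuration; values come from
# the README diff above, everything else (output_dir, single GPU) is assumed.
training_args = TrainingArguments(
    output_dir="outputs",            # assumed path, not from the commit
    learning_rate=1e-4,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=16,  # 1 device x batch 1 x 16 steps = total_train_batch_size 16
    optim="paged_adamw_32bit",       # OptimizerNames.PAGED_ADAMW
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=5,
    seed=42,
)
```

The pinned framework versions would correspond roughly to an environment like `pip install peft==0.15.2 transformers==4.51.3 datasets==3.5.1 tokenizers==0.21.1`, alongside the existing PyTorch 2.6.0+cu124 build.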