Commit 8e15393 (verified) · committed by amphora · 1 parent: 57b629b

Model save

Files changed (2):
  1. README.md +167 -0
  2. generation_config.json +14 -0
README.md ADDED
@@ -0,0 +1,167 @@
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- train_rationale_unseen_whole_re.jsonl
model-index:
- name: merged-bench-0417-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0`
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: false

load_in_8bit: false
load_in_4bit: false
strict: false

output_dir: ./outputs/out
chat_template: qwen_25
datasets:
  - path: train_rationale_unseen_whole_re.jsonl
    type: chat_template
    field_messages: messages
    message_field_role: role
    message_field_content: content
    roles:
      system:
        - system
      user:
        - user
      assistant:
        - assistant

dataset_prepared_path: last_run_prepared
val_set_size: 0.005
output_dir: ./outputs/out
eval_sample_packing: False

sequence_len: 8192
sample_packing: False
pad_to_sequence_len: False

wandb_project: mergedbench
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: amphora/merged-bench-0417-1

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

gradient_accumulation_steps: 4
micro_batch_size: 8
eval_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 30
evals_per_epoch: 3
eval_max_new_tokens: 128
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```

</details><br>

# merged-bench-0417-1

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the train_rationale_unseen_whole_re.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4004
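
The following is a minimal usage sketch, not part of the auto-generated card. It assumes the checkpoint is published under the `hub_model_id` from the config above and that the tokenizer ships the Qwen2.5 chat template; the prompt text is purely illustrative.

```python
# Sketch only: assumes the repo "amphora/merged-bench-0417-1" is available on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amphora/merged-bench-0417-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},        # hypothetical prompt
    {"role": "user", "content": "Explain what this model was trained on."},  # hypothetical prompt
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Sampling defaults (do_sample, temperature, top_p, top_k) come from generation_config.json below.
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```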

## Model description

merged-bench-0417-1 is a full-parameter supervised fine-tune of Qwen/Qwen2.5-7B-Instruct, trained with Axolotl 0.8.0 on a chat-formatted dataset at a sequence length of 8192 tokens, in bf16 with DeepSpeed ZeRO-3. See the axolotl config above for the full training setup.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained on `train_rationale_unseen_whole_re.jsonl`, a chat-template dataset whose records hold system/user/assistant messages; 0.5% of the examples (`val_set_size: 0.005`) were held out as the evaluation split. A sketch of the expected record layout follows.
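
The contents of `train_rationale_unseen_whole_re.jsonl` are not included here; the sketch below only illustrates the record layout implied by the `chat_template` dataset settings in the config (a `messages` list with `role`/`content` keys). All example text is hypothetical.

```python
import json

# Hypothetical record in the layout the axolotl chat_template loader expects:
# one JSON object per line, with a "messages" list of {role, content} dicts.
record = {
    "messages": [
        {"role": "system", "content": "You are a careful reasoning assistant."},     # hypothetical
        {"role": "user", "content": "Question text goes here."},                     # hypothetical
        {"role": "assistant", "content": "Rationale followed by the final answer."}  # hypothetical
    ]
}

with open("train_rationale_unseen_whole_re.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```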

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- num_epochs: 3.0
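
The total batch sizes above are derived quantities; a one-line sanity check under the per-device settings reported (micro batch 8, 4 gradient-accumulation steps, 4 devices):

```python
micro_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 4
per_device_eval_batch_size = 4

# Effective global batch sizes as reported in the list above.
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = per_device_eval_batch_size * num_devices  # no accumulation at eval time

assert total_train_batch_size == 128
assert total_eval_batch_size == 16
```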

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7481 | 0.0081 | 1 | 1.7361 |
| 0.4091 | 0.3313 | 41 | 0.4241 |
| 0.3857 | 0.6626 | 82 | 0.4033 |
| 0.3516 | 0.9939 | 123 | 0.3825 |
| 0.23 | 1.3313 | 164 | 0.3929 |
| 0.2441 | 1.6626 | 205 | 0.3659 |
| 0.2118 | 1.9939 | 246 | 0.3628 |
| 0.1137 | 2.3313 | 287 | 0.4025 |
| 0.1183 | 2.6626 | 328 | 0.4001 |
| 0.1077 | 2.9939 | 369 | 0.4004 |
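
The table covers 369 optimizer steps over 3 epochs. As a rough illustration of the configured learning-rate schedule (30 warmup steps, then cosine decay), here is a sketch using the generic `transformers` scheduler helper; this is an assumption about how the trainer builds the schedule, not Axolotl's internals:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Dummy parameter/optimizer just to instantiate the schedule for plotting/inspection.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=2e-5)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=30, num_training_steps=369)

for step in range(369):
    optimizer.step()
    scheduler.step()
    if step in (0, 29, 123, 246, 368):
        # Linear warmup to 2e-5 over 30 steps, then cosine decay toward 0.
        print(step + 1, scheduler.get_last_lr()[0])
```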

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
generation_config.json ADDED
@@ -0,0 +1,14 @@
```json
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.51.3"
}
```
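
These values are picked up automatically by `generate()` when the model is loaded from the Hub. A short sketch of reading and overriding them, assuming the same repo id as above:

```python
from transformers import GenerationConfig

# Assumes the repo "amphora/merged-bench-0417-1" hosts this generation_config.json.
gen_config = GenerationConfig.from_pretrained("amphora/merged-bench-0417-1")
print(gen_config.temperature, gen_config.top_p, gen_config.top_k)  # 0.7 0.8 20

# Individual settings can be overridden per call, e.g. greedy decoding for deterministic evals:
# model.generate(input_ids, generation_config=gen_config, do_sample=False)
```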