imdatta0 committed
Commit a58647f · verified · 1 Parent(s): 7c76e29

End of training

Files changed (2):
  1. README.md +50 -50
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: unsloth/llama-3-8b-bnb-4bit
+base_model: unsloth/llama-3-8b
 library_name: peft
 license: llama3
 tags:
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Meta-Llama-3-8B_metamath_ortho
 
-This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unknown dataset.
+This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4793
+- Loss: 0.4760
 
 ## Model description
 
@@ -51,53 +51,53 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.9642 | 0.0211 | 13 | 0.7175 |
-| 0.6722 | 0.0421 | 26 | 0.7035 |
-| 0.6762 | 0.0632 | 39 | 0.6860 |
-| 0.6612 | 0.0842 | 52 | 0.6867 |
-| 0.6304 | 0.1053 | 65 | 0.6793 |
-| 0.641 | 0.1264 | 78 | 0.6730 |
-| 0.6461 | 0.1474 | 91 | 0.6656 |
-| 0.6364 | 0.1685 | 104 | 0.6571 |
-| 0.6158 | 0.1896 | 117 | 0.6516 |
-| 0.61 | 0.2106 | 130 | 0.6491 |
-| 0.6084 | 0.2317 | 143 | 0.6468 |
-| 0.6196 | 0.2527 | 156 | 0.6429 |
-| 0.6142 | 0.2738 | 169 | 0.6286 |
-| 0.6078 | 0.2949 | 182 | 0.6279 |
-| 0.5948 | 0.3159 | 195 | 0.6267 |
-| 0.5707 | 0.3370 | 208 | 0.6231 |
-| 0.5863 | 0.3580 | 221 | 0.6154 |
-| 0.5869 | 0.3791 | 234 | 0.6113 |
-| 0.5955 | 0.4002 | 247 | 0.6054 |
-| 0.5681 | 0.4212 | 260 | 0.5961 |
-| 0.5761 | 0.4423 | 273 | 0.5900 |
-| 0.5772 | 0.4633 | 286 | 0.5860 |
-| 0.5691 | 0.4844 | 299 | 0.5770 |
-| 0.55 | 0.5055 | 312 | 0.5741 |
-| 0.5477 | 0.5265 | 325 | 0.5606 |
-| 0.5414 | 0.5476 | 338 | 0.5572 |
-| 0.5303 | 0.5687 | 351 | 0.5526 |
-| 0.5104 | 0.5897 | 364 | 0.5433 |
-| 0.5218 | 0.6108 | 377 | 0.5369 |
-| 0.5202 | 0.6318 | 390 | 0.5319 |
-| 0.509 | 0.6529 | 403 | 0.5278 |
-| 0.513 | 0.6740 | 416 | 0.5193 |
-| 0.4983 | 0.6950 | 429 | 0.5144 |
-| 0.4979 | 0.7161 | 442 | 0.5110 |
-| 0.4884 | 0.7371 | 455 | 0.5047 |
-| 0.4785 | 0.7582 | 468 | 0.5013 |
-| 0.472 | 0.7793 | 481 | 0.4981 |
-| 0.4662 | 0.8003 | 494 | 0.4941 |
-| 0.4958 | 0.8214 | 507 | 0.4898 |
-| 0.4817 | 0.8424 | 520 | 0.4869 |
-| 0.4647 | 0.8635 | 533 | 0.4846 |
-| 0.484 | 0.8846 | 546 | 0.4827 |
-| 0.4686 | 0.9056 | 559 | 0.4813 |
-| 0.4665 | 0.9267 | 572 | 0.4805 |
-| 0.4754 | 0.9478 | 585 | 0.4799 |
-| 0.4741 | 0.9688 | 598 | 0.4793 |
-| 0.4763 | 0.9899 | 611 | 0.4793 |
+| 0.9348 | 0.0211 | 13 | 0.7026 |
+| 0.6609 | 0.0421 | 26 | 0.6960 |
+| 0.6695 | 0.0632 | 39 | 0.6785 |
+| 0.6578 | 0.0842 | 52 | 0.6770 |
+| 0.6222 | 0.1053 | 65 | 0.6748 |
+| 0.6331 | 0.1264 | 78 | 0.6662 |
+| 0.6413 | 0.1474 | 91 | 0.6609 |
+| 0.631 | 0.1685 | 104 | 0.6538 |
+| 0.6115 | 0.1896 | 117 | 0.6490 |
+| 0.6097 | 0.2106 | 130 | 0.6458 |
+| 0.6052 | 0.2317 | 143 | 0.6533 |
+| 0.6131 | 0.2527 | 156 | 0.6335 |
+| 0.6012 | 0.2738 | 169 | 0.6165 |
+| 0.6002 | 0.2949 | 182 | 0.6204 |
+| 0.5849 | 0.3159 | 195 | 0.6220 |
+| 0.5689 | 0.3370 | 208 | 0.6159 |
+| 0.5781 | 0.3580 | 221 | 0.6110 |
+| 0.5765 | 0.3791 | 234 | 0.6027 |
+| 0.5899 | 0.4002 | 247 | 0.5983 |
+| 0.5638 | 0.4212 | 260 | 0.5905 |
+| 0.5716 | 0.4423 | 273 | 0.5874 |
+| 0.5729 | 0.4633 | 286 | 0.5809 |
+| 0.5691 | 0.4844 | 299 | 0.5729 |
+| 0.5441 | 0.5055 | 312 | 0.5659 |
+| 0.5468 | 0.5265 | 325 | 0.5584 |
+| 0.536 | 0.5476 | 338 | 0.5544 |
+| 0.5277 | 0.5687 | 351 | 0.5474 |
+| 0.5052 | 0.5897 | 364 | 0.5397 |
+| 0.5185 | 0.6108 | 377 | 0.5309 |
+| 0.5161 | 0.6318 | 390 | 0.5262 |
+| 0.5056 | 0.6529 | 403 | 0.5227 |
+| 0.5091 | 0.6740 | 416 | 0.5164 |
+| 0.492 | 0.6950 | 429 | 0.5115 |
+| 0.4936 | 0.7161 | 442 | 0.5070 |
+| 0.4818 | 0.7371 | 455 | 0.5005 |
+| 0.4762 | 0.7582 | 468 | 0.4986 |
+| 0.4685 | 0.7793 | 481 | 0.4938 |
+| 0.4614 | 0.8003 | 494 | 0.4904 |
+| 0.4942 | 0.8214 | 507 | 0.4870 |
+| 0.4767 | 0.8424 | 520 | 0.4837 |
+| 0.4589 | 0.8635 | 533 | 0.4819 |
+| 0.4806 | 0.8846 | 546 | 0.4796 |
+| 0.4647 | 0.9056 | 559 | 0.4782 |
+| 0.461 | 0.9267 | 572 | 0.4773 |
+| 0.4718 | 0.9478 | 585 | 0.4766 |
+| 0.4684 | 0.9688 | 598 | 0.4761 |
+| 0.4716 | 0.9899 | 611 | 0.4760 |
 
 
 ### Framework versions
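The updated card reports its headline loss (0.4760) as the final row of the eval table above. A minimal sketch of extracting that figure programmatically from the markdown table — the abbreviated `table` string here is a hypothetical stand-in for the full README.md text:

```python
# Minimal sketch: parse a markdown eval table like the one in the model card
# and pull out the final validation loss. The rows below are the last three
# from the updated card; a real use would read the whole README.md.
table = """\
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4718 | 0.9478 | 585 | 0.4766 |
| 0.4684 | 0.9688 | 598 | 0.4761 |
| 0.4716 | 0.9899 | 611 | 0.4760 |
"""

def parse_loss_table(md: str):
    """Return (epoch, step, val_loss) tuples from a markdown loss table."""
    rows = []
    for line in md.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip the header and the :---: alignment row; keep numeric rows only.
        if len(cells) != 4 or not cells[0].replace(".", "").isdigit():
            continue
        rows.append((float(cells[1]), int(cells[2]), float(cells[3])))
    return rows

rows = parse_loss_table(table)
final_epoch, final_step, final_loss = rows[-1]
print(final_step, final_loss)
```

The same function applied to the full table would also make it easy to confirm the trend the diff shows: validation loss falling monotonically (with small bumps) from ~0.70 at step 13 to 0.4760 at step 611.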
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f6a9ec26d13b9665175ed8e3024aa20145f96282ca429202c45fc1b83f91faf3
+oid sha256:2c71cdd27ba68ced2890f274b11a62ad179b0af38e86e45b029273ab84463dae
 size 83945296
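The repository itself stores only this Git LFS pointer file; the ~84 MB adapter weights live in LFS storage and are addressed by the sha256 `oid`. A minimal sketch of parsing such a pointer and checking a downloaded blob against it — the demo blob is a hypothetical stand-in, not the real adapter file:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key -> value dict.

    Pointer files are short 'key value' lines, e.g.:
        version https://git-lfs.github.com/spec/v1
        oid sha256:<hex digest>
        size <bytes>
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def blob_matches_pointer(blob: bytes, pointer: dict) -> bool:
    """Check a downloaded blob against the pointer's size and sha256 oid."""
    algo, _, digest = pointer["oid"].partition(":")
    if algo != "sha256" or int(pointer["size"]) != len(blob):
        return False
    return hashlib.sha256(blob).hexdigest() == digest

# Demo with a stand-in blob (the real adapter is ~84 MB and not embedded here).
blob = b"example adapter bytes"
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
    f"size {len(blob)}\n"
)
pointer = parse_lfs_pointer(pointer_text)
print(blob_matches_pointer(blob, pointer))  # True
```

This is also why the diff for this file shows only the `oid` line changing while `size` stays at 83945296: retraining produced new adapter weights (a new content hash) of identical byte length.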