wcvz committed
Commit 0847717 · verified · 1 Parent(s): 012c248

Model save

Files changed (2):
  1. README.md +92 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ license: mit
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: facebook/esm2_t30_150M_UR50D
+ metrics:
+ - accuracy
+ model-index:
+ - name: esm2_t130_150M-lora-classifier_2024-04-25_23-13-58
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # esm2_t130_150M-lora-classifier_2024-04-25_23-13-58
+
+ This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4754
+ - Accuracy: 0.8809
+
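As a minimal usage sketch (not part of the generated card): it assumes the LoRA adapter sits on top of `EsmForSequenceClassification` with two labels, and the adapter repo id, label count, and example sequence are placeholders to adjust rather than values taken from this card.

```python
# Hedged inference sketch. Assumes a binary protein-sequence classifier;
# adjust num_labels and the adapter repo id to the real values.
# Version pins from "Framework versions" below: peft 0.10.0, transformers 4.39.3,
# torch 2.2.1, datasets 2.16.1, tokenizers 0.15.2.
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification
from peft import PeftModel

base_id = "facebook/esm2_t30_150M_UR50D"
adapter_id = "esm2_t130_150M-lora-classifier_2024-04-25_23-13-58"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = EsmForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels assumed
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```
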
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0005701568055793089
+ - train_batch_size: 12
+ - eval_batch_size: 12
+ - seed: 8893
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
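A sketch of how the hyperparameters above could map onto a PEFT LoRA + `Trainer` setup. The `LoraConfig` values are placeholders (the card does not list them); only the `TrainingArguments` values come from the list, and the datasets are left out because the card marks them as "More information needed".

```python
# Hedged training sketch, not the author's actual script.
from transformers import (AutoTokenizer, EsmForSequenceClassification,
                          TrainingArguments, Trainer)
from peft import LoraConfig, get_peft_model

base_id = "facebook/esm2_t30_150M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = EsmForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels assumed

lora_config = LoraConfig(          # placeholder values, not stated in the card
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "key", "value"],  # assumed attention projections
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="esm2_t130_150M-lora-classifier_2024-04-25_23-13-58",  # assumed
    learning_rate=0.0005701568055793089,
    per_device_train_batch_size=12,   # batch size 12, assuming a single device
    per_device_eval_batch_size=12,
    seed=8893,
    lr_scheduler_type="cosine",
    num_train_epochs=30,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="epoch",      # assumed; the card logs one eval per epoch
    logging_strategy="epoch",         # assumed
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

# The train/eval datasets are not described in the card, so they stay as placeholders:
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```
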
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.587 | 1.0 | 128 | 0.6443 | 0.5957 |
+ | 0.4373 | 2.0 | 256 | 0.6115 | 0.6699 |
+ | 0.3057 | 3.0 | 384 | 0.4991 | 0.7812 |
+ | 0.2758 | 4.0 | 512 | 0.4353 | 0.8242 |
+ | 0.4801 | 5.0 | 640 | 0.3155 | 0.8691 |
+ | 0.2161 | 6.0 | 768 | 0.3821 | 0.8301 |
+ | 0.178 | 7.0 | 896 | 0.2889 | 0.875 |
+ | 0.3202 | 8.0 | 1024 | 0.2716 | 0.8945 |
+ | 0.192 | 9.0 | 1152 | 0.3002 | 0.8848 |
+ | 0.0997 | 10.0 | 1280 | 0.3142 | 0.8828 |
+ | 0.0146 | 11.0 | 1408 | 0.3388 | 0.8965 |
+ | 0.0777 | 12.0 | 1536 | 0.4100 | 0.8711 |
+ | 0.0337 | 13.0 | 1664 | 0.3152 | 0.8848 |
+ | 0.4337 | 14.0 | 1792 | 0.4699 | 0.8848 |
+ | 0.2544 | 15.0 | 1920 | 0.3347 | 0.8867 |
+ | 0.0166 | 16.0 | 2048 | 0.4547 | 0.8770 |
+ | 0.0084 | 17.0 | 2176 | 0.3627 | 0.8867 |
+ | 0.3829 | 18.0 | 2304 | 0.3663 | 0.8887 |
+ | 0.096 | 19.0 | 2432 | 0.3994 | 0.8848 |
+ | 0.017 | 20.0 | 2560 | 0.4222 | 0.8867 |
+ | 0.0093 | 21.0 | 2688 | 0.4519 | 0.8906 |
+ | 0.0035 | 22.0 | 2816 | 0.4575 | 0.8828 |
+ | 0.0072 | 23.0 | 2944 | 0.4675 | 0.8828 |
+ | 0.0306 | 24.0 | 3072 | 0.4675 | 0.8867 |
+ | 0.1433 | 25.0 | 3200 | 0.4795 | 0.8828 |
+ | 0.0073 | 26.0 | 3328 | 0.4755 | 0.8789 |
+ | 0.3764 | 27.0 | 3456 | 0.4759 | 0.8809 |
+ | 0.02 | 28.0 | 3584 | 0.4723 | 0.8828 |
+ | 0.0061 | 29.0 | 3712 | 0.4736 | 0.8809 |
+ | 0.0042 | 30.0 | 3840 | 0.4754 | 0.8809 |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.39.3
+ - Pytorch 2.2.1
+ - Datasets 2.16.1
+ - Tokenizers 0.15.2

adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:47f5a54a9d98bd94f6f5f19527260f033016c580de22de4e1cde4c337b026b66
+ oid sha256:f5003f69de5236da49ed65d5c64058e641b7329e663c4adde8f0e358c210156a
  size 3514768
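Since the pointer follows the git-lfs spec, a downloaded `adapter_model.safetensors` can be checked against the new `oid` and `size` above. A small sketch, assuming the file has already been fetched to the local path shown:

```python
# Verify a downloaded adapter against the LFS pointer (oid = sha256 of the blob).
import hashlib
import os

path = "adapter_model.safetensors"  # assumed local path after download
expected_oid = "f5003f69de5236da49ed65d5c64058e641b7329e663c4adde8f0e358c210156a"
expected_size = 3514768

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("adapter_model.safetensors matches the LFS pointer")
```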