---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: sparse_llama_7b_hf2_refined_web_50p_2024-03-27
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# sparse_llama_7b_hf2_refined_web_50p_2024-03-27

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0766
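
A minimal usage sketch with the standard `transformers` AutoClass API; the repository id below is a placeholder for wherever this checkpoint is hosted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with the actual Hub path of this checkpoint.
model_id = "sparse_llama_7b_hf2_refined_web_50p_2024-03-27"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```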

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1100
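
The per-device and total batch sizes are consistent: 1 per device × 4 devices × 8 gradient-accumulation steps = 32 for training, and 4 × 4 = 16 for evaluation. As a minimal reproduction sketch, the list maps onto `transformers.TrainingArguments` as below, assuming a 4-GPU launch (e.g. `torchrun --nproc_per_node=4`); `output_dir` is a hypothetical placeholder:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder, and
# the 4-device parallelism comes from the launcher, not from these arguments.
training_args = TrainingArguments(
    output_dir="sparse_llama_7b_hf2_refined_web_50p",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # x 4 GPUs x 8 accumulation = 32 effective
    per_device_eval_batch_size=4,   # x 4 GPUs = 16 effective
    gradient_accumulation_steps=8,
    seed=0,
    lr_scheduler_type="linear",
    max_steps=1100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```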

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1821        | 0.01  | 25   | 2.2632          |
| 2.1824        | 0.02  | 50   | 2.2610          |
| 2.2916        | 0.02  | 75   | 2.2561          |
| 2.2562        | 0.03  | 100  | 2.2484          |
| 2.3387        | 0.04  | 125  | 2.2453          |
| 2.1762        | 0.05  | 150  | 2.2402          |
| 2.1439        | 0.06  | 175  | 2.2353          |
| 2.3081        | 0.06  | 200  | 2.2326          |
| 2.268         | 0.07  | 225  | 2.2300          |
| 2.2193        | 0.08  | 250  | 2.2303          |
| 2.1589        | 0.09  | 275  | 2.2296          |
| 2.1932        | 0.1   | 300  | 2.2276          |
| 2.2406        | 0.1   | 325  | 2.2271          |
| 2.2102        | 0.11  | 350  | 2.2289          |
| 2.1311        | 0.12  | 375  | 2.2272          |
| 2.2318        | 0.13  | 400  | 2.2269          |
| 2.2155        | 0.14  | 425  | 2.2273          |
| 2.1799        | 0.14  | 450  | 2.2267          |
| 2.252         | 0.15  | 475  | 2.2250          |
| 2.2588        | 0.16  | 500  | 2.2262          |
| 2.1677        | 0.17  | 525  | 2.2271          |
| 2.163         | 0.18  | 550  | 2.2264          |
| 2.2783        | 0.18  | 575  | 2.2251          |
| 2.1625        | 0.19  | 600  | 2.2253          |
| 2.1906        | 0.2   | 625  | 2.2251          |
| 2.2748        | 0.21  | 650  | 2.2251          |
| 2.171         | 0.22  | 675  | 2.2249          |
| 2.1929        | 0.22  | 700  | 2.2252          |
| 2.2203        | 0.23  | 725  | 2.2232          |
| 2.1143        | 0.24  | 750  | 2.2239          |
| 2.1969        | 0.25  | 775  | 2.2230          |
| 2.2492        | 0.26  | 800  | 2.2233          |
| 2.1988        | 0.26  | 825  | 2.2240          |
| 2.1546        | 0.27  | 850  | 2.2245          |
| 2.1605        | 0.28  | 875  | 2.2229          |
| 2.1417        | 0.29  | 900  | 2.2224          |
| 2.3172        | 0.3   | 925  | 2.2247          |
| 2.2799        | 0.3   | 950  | 2.2240          |
| 2.2258        | 0.31  | 975  | 2.2221          |
| 2.1175        | 0.32  | 1000 | 2.2216          |
| 2.2314        | 0.33  | 1025 | 2.2227          |
| 2.1956        | 0.34  | 1050 | 2.2211          |
| 2.1695        | 0.34  | 1075 | 2.2206          |
| 2.1658        | 0.35  | 1100 | 2.2207          |
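
Cross-entropy loss converts to perplexity via exp(loss); as a quick sanity check against the evaluation loss reported at the top of this card:

```python
import math

# Perplexity implied by the reported evaluation loss of 2.0766.
eval_loss = 2.0766
print(f"perplexity = exp({eval_loss}) ~= {math.exp(eval_loss):.2f}")  # ~= 7.98
```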

### Framework versions

- Transformers 4.40.0.dev0
- PyTorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
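
A small sketch for checking that a local environment matches these versions; since the Transformers version is a dev build, an exact match generally requires installing from source:

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected: 4.40.0.dev0
print("PyTorch:", torch.__version__)              # expected: 2.1.1+cu121
print("Datasets:", datasets.__version__)          # expected: 2.15.0
print("Tokenizers:", tokenizers.__version__)      # expected: 0.15.2
```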