Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +54 -40
README.md CHANGED
@@ -1,41 +1,55 @@
- ---
- library_name: peft
- license: apache-2.0
- base_model: Qwen/Qwen2.5-1.5B-Instruct
- tags:
- - axolotl
- - generated_from_trainer
- model-index:
- - name: 288c426f-055b-4e1b-9610-cc254dff3a12
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
-
- # 288c426f-055b-4e1b-9610-cc254dff3a12
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0066
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ### Framework versions
-
- - PEFT 0.13.2
- - Transformers 4.46.0
- - Pytorch 2.5.0+cu124
- - Datasets 3.0.1
+ ---
+ library_name: peft
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ tags:
+ - axolotl
+ - generated_from_trainer
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ model-index:
+ - name: 288c426f-055b-4e1b-9610-cc254dff3a12
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+
+ # 288c426f-055b-4e1b-9610-cc254dff3a12
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0066
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ### Framework versions
+
+ - PEFT 0.13.2
+ - Transformers 4.46.0
+ - Pytorch 2.5.0+cu124
+ - Datasets 3.0.1
  - Tokenizers 0.20.1
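
Since the card declares a PEFT adapter trained on top of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) (PEFT 0.13.2, Transformers 4.46.0), a minimal usage sketch could look like the following. The adapter repo id below is a placeholder, not taken from this PR; substitute the actual repository the card belongs to.

```python
# Minimal sketch of loading the PEFT adapter on top of the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
# Placeholder repo id for the adapter named in the card's model-index.
adapter_id = "your-username/288c426f-055b-4e1b-9610-cc254dff3a12"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Quick generation check in one of the languages added by this PR's tag update.
prompt = "Bonjour, comment allez-vous ?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```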