---
library_name: transformers
license: cc-by-sa-4.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
pipeline_tag: text-generation
---

# Helium-1-2b

<img src="https://huggingface.co/kyutai/moshi-1-2b/resolve/main/helium_sticker.png" width="400">

## Model Description

Helium-1 is a lightweight language model with 2B parameters, targeting edge and mobile devices.
It supports the 24 official languages of the European Union.

⚠️ Helium-1 is a base model: it has not been fine-tuned to follow instructions or aligned with human preferences.
For most downstream use cases, the model should be aligned using supervised fine-tuning, RLHF, or related methods.

- **Developed by:** Kyutai
- **Model type:** Large Language Model
- **Language(s) (NLP):** Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian, Swedish.
- **License:** CC-BY-SA 4.0
- **Terms of use:** As a model distilled from Gemma 2, Helium-1 is subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms

## Uses

### Direct Use

The intended use of the Helium model is research and development of natural language processing systems, including but not limited to language generation and understanding.
The model can be used in Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian, and Swedish.
For most downstream use cases, the model should be aligned using supervised fine-tuning, RLHF, or related methods.

### Out-of-Scope Use

The model should not be used in languages other than those it was trained on.
The model is not intended to be used for any malicious or illegal activities.
The model was not fine-tuned to follow instructions and should not be used as an instruction-following model.

## Bias, Risks, and Limitations

Helium-1 is a base language model that has not been aligned with human preferences.
As such, the model can generate incorrect, biased, harmful, or generally unhelpful content.
It should therefore not be used in downstream applications without further alignment, evaluation, and risk mitigation.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import pipeline

model_id = "kyutai/helium-1-2b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Returns a list of {"generated_text": ...} dicts containing the prompt and its continuation.
text = pipe("Hello, today is a great day to")
```

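For more control over tokenization and decoding, the checkpoint can also be loaded directly. This is a minimal sketch assuming the standard `transformers` auto classes work for this checkpoint; the generation parameters are illustrative, not recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyutai/helium-1-2b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Encode the prompt and sample a continuation (sampling settings are illustrative).
inputs = tokenizer("Hello, today is a great day to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
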
## Training Details

### Training Data

Helium-1 was trained on data from Common Crawl, which was preprocessed with the dactory library.

## Evaluation

#### Testing Data

The model was evaluated on MMLU, TriviaQA, NaturalQuestions, ARC Easy & Challenge, Open Book QA, Common Sense QA, Physical Interaction QA, Social Interaction QA, HellaSwag, WinoGrande, Multilingual Knowledge QA, and FLORES 200.

#### Metrics

We report accuracy on MMLU, ARC, OBQA, CSQA, PIQA, SIQA, HellaSwag, and WinoGrande.
We report exact match on TriviaQA, NQ, and MKQA.
We report BLEU on FLORES.

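The exact-match normalization used for the QA benchmarks is not specified here; for orientation, a common SQuAD-style variant (an assumption, not Kyutai's published protocol) looks like this:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, and collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, references: list[str]) -> bool:
    # A prediction is correct if it matches any reference answer after normalization.
    return any(normalize(prediction) == normalize(ref) for ref in references)

print(exact_match("The Eiffel Tower", ["eiffel tower"]))  # True
```
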
#### English Results

| Benchmark | Helium-1 | HF SmolLM2 (1.7B) | Gemma-2 (2.6B) | Llama-3.2 (3B) | Qwen2.5 (1.5B) |
|--------------|:------:|:------:|:------:|:------:|:------:|
| | | | | | | |
| MMLU | 52.0 | 50.4 | 53.1 | 56.6 | 61.0 |
| NQ | 16.5 | 15.1 | 17.7 | 22.0 | 13.1 |
| TQA | 46.5 | 45.4 | 49.9 | 53.6 | 35.9 |
| ARC E | 82.2 | 81.8 | 81.1 | 84.6 | 89.7 |
| ARC C | 64.6 | 64.7 | 66.0 | 69.0 | 77.2 |
| OBQA | 65.4 | 61.4 | 64.6 | 68.4 | 73.8 |
| CSQA | 63.6 | 59.0 | 64.4 | 65.4 | 72.4 |
| PIQA | 78.5 | 77.7 | 79.8 | 78.9 | 76.0 |
| SIQA | 62.3 | 57.5 | 61.9 | 63.8 | 68.7 |
| HS | 73.6 | 73.2 | 74.7 | 76.9 | 67.5 |
| WG | 66.9 | 65.6 | 71.2 | 72.0 | 64.8 |
| | | | | | | |
| Average | 61.1 | 59.3 | 62.2 | 64.7 | 63.6 |

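As a sanity check, the Average row appears to be the unweighted mean of the eleven benchmark rows; the snippet below verifies this for the Helium-1 column (the averaging scheme is inferred from the numbers, not stated in the source).

```python
# Helium-1 column of the English results table, in row order.
helium_scores = [52.0, 16.5, 46.5, 82.2, 64.6, 65.4, 63.6, 78.5, 62.3, 73.6, 66.9]
print(round(sum(helium_scores) / len(helium_scores), 1))  # 61.1, matching the table
```
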
#### Multilingual Results

| Benchmark | Helium-1 | Gemma-2 (2.6B) | Llama-3.2 (3B) |
|--------------|:------:|:------:|:------:|
| | | | |
| ARC E | 71.1 | 65.8 | 68.2 |
| ARC C | 54.8 | 51.1 | 52.6 |
| MMLU | 44.8 | 43.1 | 45.3 |
| HS | 51.9 | 49.9 | 48.4 |
| FLORES | 20.6 | 21.9 | 19.8 |
| MKQA | 16.5 | 17.2 | 19.7 |
| | | | |
| Average | 43.3 | 41.5 | 42.3 |

## Technical Specifications

### Model Architecture and Objective

| Hyperparameter | Value |
|--------------|:------:|
| Model dimension | 2048 |
| MLP dimension | 8192 |
| Layers | 28 |
| Heads | 16 |
| RoPE theta | 20,000 |
| Context size | 4096 |
| Max learning rate | 2.4e-04 |
| Total steps | 500,000 |
| Weight decay | 0.1 |
| Gradient clip | 1.0 |

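Assuming the checkpoint ships a standard `transformers` configuration, the architecture values above can be cross-checked without downloading the weights; the field names below are the conventional ones for decoder-only models and are an assumption, not confirmed from the source.

```python
from transformers import AutoConfig

# Fetch only the model configuration (no weights are downloaded).
config = AutoConfig.from_pretrained("kyutai/helium-1-2b")
print(config.hidden_size)          # expected: 2048 (model dimension)
print(config.intermediate_size)    # expected: 8192 (MLP dimension)
print(config.num_hidden_layers)    # expected: 28
print(config.num_attention_heads)  # expected: 16
```
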
#### Hardware

The model was trained on 64 NVIDIA H100 Tensor Core GPUs.

#### Software

The model was trained using JAX.

## Citation

Blog post: [Helium 1: a modular and multilingual LLM](https://kyutai.org/2025/04/30/helium.html).