---
library_name: transformers
license: cc-by-sa-4.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
pipeline_tag: text-generation
---
# Helium-1-2b
<img src="https://huggingface.co/kyutai/moshi-1-2b/resolve/main/helium_sticker.png" width="400">
## Model Description
<!-- Provide a longer summary of what this model is. -->
Helium-1 is a lightweight language model with 2B parameters, targeting edge and mobile devices.
It supports the 24 official languages of the European Union.
⚠️ Helium-1 is a base model, which was not fine-tuned to follow instructions or human preferences.
For most downstream use cases, the model should be aligned with supervised fine-tuning, RLHF or related methods.
- **Developed by:** Kyutai
- **Model type:** Large Language Model
- **Language(s) (NLP):** Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian, Swedish.
- **License:** CC-BY-SA 4.0
- **Terms of use:** As a model distilled from Gemma 2, Helium 1 is subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The intended use of the Helium model is research and development of natural language processing systems, including but not limited to language generation and understanding.
The model can be used in Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian, Swedish.
For most downstream use cases, the model should be aligned with supervised fine-tuning, RLHF or related methods.
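As an illustration of such alignment, the sketch below shows one way supervised fine-tuning could be set up with the standard `transformers` `Trainer`. The dataset name and all hyperparameters are placeholders for illustration only and do not correspond to any Kyutai training recipe.

```python
# Minimal supervised fine-tuning sketch (illustrative placeholders, not Kyutai's recipe).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "kyutai/helium-1-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base models often ship without a padding token; reuse EOS if needed.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Hypothetical instruction dataset with a "text" column of prompt + response strings.
dataset = load_dataset("my_org/my_sft_dataset", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="helium-1-2b-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM fine-tuning (labels are the inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```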
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model should not be used in languages other than those on which it was trained.
The model is not intended to be used for any malicious or illegal activities.
The model was not fine-tuned to follow instructions and should therefore not be used as an instruction-following model.
## Bias, Risks, and Limitations
Helium-1 is a base language model that was not aligned to human preferences.
As such, it can generate incorrect, biased, harmful or otherwise unhelpful content.
It should therefore not be used in downstream applications without further alignment, evaluation and risk mitigation.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import pipeline

model_id = "kyutai/helium-1-2b"

# Load the model in bfloat16 and place it automatically on the available device(s).
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate a continuation of the prompt.
text = pipe("Hello, today is a great day to")
print(text[0]["generated_text"])
```
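For finer control over generation, the model and tokenizer can also be loaded directly; the sampling parameters below are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyutai/helium-1-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tokenize a prompt and generate a continuation; sampling settings are illustrative.
inputs = tokenizer("Hello, today is a great day to", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```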
## Training Details
### Training Data
Helium-1 was trained on data from Common Crawl, which was preprocessed with the dactory library.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
#### Testing Data
The model was evaluated on MMLU, TriviaQA, NaturalQuestions, ARC Easy & Challenge, Open Book QA, Common Sense QA,
Physical Interaction QA, Social Interaction QA, HellaSwag, WinoGrande, Multilingual Knowledge QA, FLORES 200.
#### Metrics
We report accuracy on MMLU, ARC, OBQA, CSQA, PIQA, SIQA, HellaSwag, WinoGrande.
We report exact match on TriviaQA, NQ and MKQA.
We report BLEU on FLORES.
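The sketch below illustrates what the accuracy and exact-match metrics measure; it is not the evaluation harness used to produce the numbers reported here (answer normalization in particular varies across benchmarks).

```python
# Illustrative metric definitions; not the exact evaluation code used for these results.
def accuracy(predictions, references):
    """Fraction of examples where the predicted choice matches the reference."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

def exact_match(predictions, references):
    """Fraction of examples where the generated answer equals the gold answer
    after simple normalization (lowercasing and stripping whitespace)."""
    normalize = lambda s: s.strip().lower()
    return sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references)) / len(references)

# Example usage
print(accuracy(["A", "C", "B"], ["A", "B", "B"]))           # 0.666...
print(exact_match(["Paris", " rome "], ["Paris", "Rome"]))  # 1.0
```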
#### English Results
| Benchmark | Helium-1 | HF SmolLM2 (1.7B) | Gemma-2 (2.6B) | Llama-3.2 (3B) | Qwen2.5 (1.5B) |
|--------------|:------:|:------:|:------:|:------:|:------:|
| | | | | | |
| MMLU | 52.0 | 50.4 | 53.1 | 56.6 | 61.0 |
| NQ | 16.5 | 15.1 | 17.7 | 22.0 | 13.1 |
| TQA | 46.5 | 45.4 | 49.9 | 53.6 | 35.9 |
| ARC E | 82.2 | 81.8 | 81.1 | 84.6 | 89.7 |
| ARC C | 64.6 | 64.7 | 66.0 | 69.0 | 77.2 |
| OBQA | 65.4 | 61.4 | 64.6 | 68.4 | 73.8 |
| CSQA | 63.6 | 59.0 | 64.4 | 65.4 | 72.4 |
| PIQA | 78.5 | 77.7 | 79.8 | 78.9 | 76.0 |
| SIQA | 62.3 | 57.5 | 61.9 | 63.8 | 68.7 |
| HS | 73.6 | 73.2 | 74.7 | 76.9 | 67.5 |
| WG | 66.9 | 65.6 | 71.2 | 72.0 | 64.8 |
| | | | | | |
| Average | 61.1 | 59.3 | 62.2 | 64.7 | 63.6 |
#### Multilingual Results
| Benchmark | Helium-1 | Gemma-2 (2.6B) | Llama-3.2 (3B) |
|--------------|:------:|:------:|:------:|
| | | | |
| ARC E | 71.1 | 65.8 | 68.2 |
| ARC C | 54.8 | 51.1 | 52.6 |
| MMLU | 44.8 | 43.1 | 45.3 |
| HS | 51.9 | 49.9 | 48.4 |
| FLORES | 20.6 | 21.9 | 19.8 |
| MKQA | 16.5 | 17.2 | 19.7 |
| | | | |
| Average | 43.3 | 41.5 | 42.3 |
## Technical Specifications
### Model Architecture and Objective
| Hyperparameter | Value |
|--------------|:------:|
| Model dimension | 2048 |
| MLP dimension | 8192 |
| Layers | 28 |
| Heads | 16 |
| RoPE theta | 20,000 |
| Context size | 4096 |
| Max learning rate | 2.4e-04 |
| Total steps | 500,000 |
| Weight decay | 0.1 |
| Gradient clip | 1.0 |
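As a rough illustration of the architecture described above, a Llama-style `transformers` configuration with the same dimensions could be written as below. This is an approximation for readability only; it is not the configuration class or checkpoint config shipped with `kyutai/helium-1-2b`, and the vocabulary size and attention details are placeholders.

```python
from transformers import LlamaConfig

# Llama-style approximation of the architecture table above; illustration only.
# NOT the configuration shipped with kyutai/helium-1-2b.
config = LlamaConfig(
    hidden_size=2048,              # model dimension
    intermediate_size=8192,        # MLP dimension
    num_hidden_layers=28,          # layers
    num_attention_heads=16,        # heads
    rope_theta=20_000.0,           # RoPE theta
    max_position_embeddings=4096,  # context size
    vocab_size=48_000,             # placeholder
)
print(config)
```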
#### Hardware
The model was trained on 64 NVIDIA H100 Tensor Core GPUs.
#### Software
The model was trained using JAX.
## Citation
Blog post: [Helium 1: a modular and multilingual LLM](https://kyutai.org/2025/04/30/helium.html).