Text Generation · Transformers · GGUF · English · olmo2 · unsloth · conversational
danielhanchen committed · verified
Commit 69839d9 · 1 parent: 8c3ecc4

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-Q2_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+OLMo-2-0425-1B-Instruct-UD-Q4_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
OLMo-2-0425-1B-Instruct-UD-IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:340cf2523701a967c8236723c27c4342257d7420f193d289619bb4cfc46777d7
size 535647456
OLMo-2-0425-1B-Instruct-UD-IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5da3c8405462219ef0ec7eab83cb1250fb05ad3e15036b09a4308d62616fcb3b
size 517428448
OLMo-2-0425-1B-Instruct-UD-IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b26c8bba8fcee57ad9343efb9cbee5e1a85d5573797a74a70f2880f834629b32
size 641225952
OLMo-2-0425-1B-Instruct-UD-IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e1e0688ca856813820e4ef5cd4c494828d5048d5d43a6c24897848e943ea6a7
size 681530592
OLMo-2-0425-1B-Instruct-UD-Q2_K_XL.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78be651ee5268a3a9044b3ea1ec8db5efa738ccd92f02a41ea2439cf52b3dd48
size 709186784
OLMo-2-0425-1B-Instruct-UD-Q4_K_XL.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17fa14fdf1b4952d92a959c5d1549d4dc38dd45a82c71dbd854ce289f0d418fa
size 967202016
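Each `ADDED` file above is a Git LFS pointer: the repository tracks only the `oid` and `size` metadata, while the GGUF weights themselves are fetched from LFS storage. A minimal sketch of downloading one quant directly with `huggingface_hub`, assuming this commit belongs to a repo such as `unsloth/OLMo-2-0425-1B-Instruct-GGUF` (the repo id is not shown in this diff):

```python
from huggingface_hub import hf_hub_download

# Hypothetical repo id; substitute the actual repository this commit lives in.
path = hf_hub_download(
    repo_id="unsloth/OLMo-2-0425-1B-Instruct-GGUF",  # assumed, not in the diff
    filename="OLMo-2-0425-1B-Instruct-UD-Q4_K_XL.gguf",
)
print(path)  # local cache path; the file is ~967 MB per the pointer above
```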
README.md ADDED
@@ -0,0 +1,130 @@
---
tags:
- unsloth
license: apache-2.0
language:
- en
datasets:
- allenai/RLVR-MATH
base_model:
- allenai/OLMo-2-0425-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
---

<img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px">

OLMo 2 1B Instruct April 2025 is a post-trained variant of the [allenai/OLMo-2-0425-1B-RLVR1](https://huggingface.co/allenai/OLMo-2-0425-1B-RLVR1) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture-0225), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-0425-1b-preference-mix), and final RLVR training on [this dataset](https://huggingface.co/datasets/allenai/RLVR-MATH).
Tülu 3 is designed for state-of-the-art performance on a diverse range of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs, and associated training details.

## Model description

- **Model type:** A model trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** allenai/OLMo-2-0425-1B-RLVR1

### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo-core
  - Evaluation code: https://github.com/allenai/olmes
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** https://arxiv.org/abs/2501.00656
- **Demo:** https://playground.allenai.org/

## Installation

OLMo 2 1B is supported in transformers v4.48 or higher:
```bash
pip install transformers>=4.48
```

If using vLLM, you will need to install from the main branch until v0.7.4 is released.
## Using the model

### Loading with HuggingFace

To load the model with HuggingFace, use the following snippet:
```python
from transformers import AutoModelForCausalLM

olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
```
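Building on the snippet above, here is a short end-to-end generation sketch; the chat formatting uses the tokenizer's built-in template (described in the next section), and the sampling settings are illustrative rather than recommendations from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "How are you doing?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Illustrative sampling settings; tune for your use case.
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```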
### Chat template

*NOTE: This differs from previous OLMo 2 and Tülu 3 models due to a minor change in configuration. It does NOT have the bos token before the rest of the template; our other models have <|endoftext|> at the beginning of the chat template.*

The chat template for our models is formatted as:
```
<|user|>\nHow are you doing?\n<|assistant|>\nI'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
It is embedded within the tokenizer as well, for `tokenizer.apply_chat_template`.
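To inspect the exact rendering shown above, the template can be applied with `tokenize=False`, which returns the formatted prompt string instead of token ids; a quick sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")
messages = [{"role": "user", "content": "How are you doing?"}]

# Returns the <|user|>/<|assistant|> markup as a plain string for inspection.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```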
### Intermediate Checkpoints

To facilitate research on RL finetuning, we have released our intermediate checkpoints from the model's RLVR training.
The model weights are saved every 20 training steps and are accessible in the revisions of the HuggingFace repository.
For example, you can load with:
```python
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B-Instruct", revision="step_200")
```
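To see which `step_*` revisions are available before loading one, the repo's branches can be enumerated with `huggingface_hub`; a minimal sketch, assuming the intermediate checkpoints follow the `step_200` branch naming shown above:

```python
from huggingface_hub import list_repo_refs

# Each intermediate RLVR checkpoint is exposed as a branch of the model repo.
refs = list_repo_refs("allenai/OLMo-2-0425-1B-Instruct")
steps = [branch.name for branch in refs.branches if branch.name.startswith("step_")]
print(sorted(steps, key=lambda s: int(s.split("_")[1])))  # e.g. ["step_20", "step_40", ...]
```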
### Bias, Risks, and Limitations

The OLMo 2 models have limited safety training and are not deployed with in-the-loop filtering of responses the way ChatGPT is, so the model can produce problematic outputs (especially when prompted to do so).

## Performance

| Model | Average | AlpacaEval 2 LC | BBH | DROP | GSM8K | IFEval | MATH | MMLU | Safety | PopQA | TruthQA |
|-------|---------|-----------------|-----|------|-------|--------|------|------|--------|-------|---------|
| **OLMo 1B 0724** | 24.4 | 2.4 | 29.9 | 27.9 | 10.8 | 25.3 | 2.2 | 36.6 | 52.0 | 12.1 | 44.3 |
| **SmolLM2 1.7B** | 34.2 | 5.8 | 39.8 | 30.9 | 45.3 | 51.6 | 20.3 | 34.3 | 52.4 | 16.4 | 45.3 |
| **Gemma 3 1B** | 38.3 | 20.4 | 39.4 | 25.1 | 35.0 | 60.6 | 40.3 | 38.9 | 70.2 | 9.6 | 43.8 |
| **Llama 3.1 1B** | 39.3 | 10.1 | 40.2 | 32.2 | 45.4 | 54.0 | 21.6 | 46.7 | 87.2 | 13.8 | 41.5 |
| **Qwen 2.5 1.5B** | 41.7 | 7.4 | 45.8 | 13.4 | 66.2 | 44.2 | 40.6 | 59.7 | 77.6 | 15.5 | 46.5 |
| **---** | | | | | | | | | | | |
| **OLMo 2 1B SFT** | 36.9 | 2.4 | 32.8 | 33.8 | 52.1 | 50.5 | 13.2 | 36.4 | 93.2 | 12.7 | 42.1 |
| **OLMo 2 1B DPO** | 40.6 | 9.5 | 33.0 | 34.5 | 59.0 | 67.1 | 14.1 | 39.9 | 89.9 | 12.3 | 46.4 |
| **OLMo 2 1B** | 42.7 | 9.1 | 35.0 | 34.6 | 68.3 | 70.1 | 20.7 | 40.0 | 87.6 | 12.9 | 48.7 |

## License and use

OLMo 2 is licensed under the Apache 2.0 license.
OLMo 2 is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).

## Citation

```bibtex
@article{olmo20242olmo2furious,
  title={2 OLMo 2 Furious},
  author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2501.00656},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00656},
}
```
config.json ADDED
@@ -0,0 +1,28 @@
{
  "architectures": [
    "Olmo2ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 100257,
  "eos_token_id": 100257,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 4096,
  "model_type": "olmo2",
  "num_attention_heads": 16,
  "num_hidden_layers": 16,
  "num_key_value_heads": 16,
  "pad_token_id": 100277,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 500000,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.52.0.dev0",
  "unsloth_fixed": true,
  "use_cache": false,
  "vocab_size": 100352
}
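The fields above can also be read programmatically without downloading the weights; a small sketch using `AutoConfig` against the upstream allenai repo (assuming its config matches this one, minus the `unsloth_fixed` flag):

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
cfg = AutoConfig.from_pretrained("allenai/OLMo-2-0425-1B-Instruct")

print(cfg.model_type)               # "olmo2"
print(cfg.hidden_size)              # 2048
print(cfg.num_hidden_layers)        # 16
print(cfg.max_position_embeddings)  # 4096 (context window)
```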