---
license: apache-2.0
pipeline_tag: text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license_link: LICENSE
base_model: Qwen/Qwen2.5-0.5B
quantized_by: bartowski
tags:
- llamafile
- chat
---

# Qwen 2.5 Instruct 0.5B - llamafile

- Model creator: [Qwen](https://huggingface.co/Qwen/)
- Original model: [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/)

Mozilla packaged the Qwen 2.5 models into executable weights that we
call [llamafiles](https://github.com/Mozilla-Ocho/llamafile). This gives
you the easiest and fastest way to use the model on Linux, macOS,
Windows, FreeBSD, OpenBSD, and NetBSD systems you control, on both
AMD64 and ARM64.

*Software Last Updated: 2025-03-31*

*Llamafile Version: 0.9.2*

## Quickstart

To get started, you need both the Qwen 2.5 weights and the llamafile
software. Both of them are included in a single file, which can be
downloaded and run as follows:

```sh
wget https://huggingface.co/Mozilla/Qwen2.5-0.5B-Instruct-llamafile/resolve/main/Qwen2.5-0.5B-Instruct-Q6_K.llamafile
chmod +x Qwen2.5-0.5B-Instruct-Q6_K.llamafile
./Qwen2.5-0.5B-Instruct-Q6_K.llamafile
```
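
If you'd rather script the download, the weights can also be fetched
with the `huggingface_hub` pip package. This is just a sketch of one
way to do it, assuming the repo and filename shown above and the
default Hugging Face cache location:

```python
# Sketch: fetch the llamafile with huggingface_hub instead of wget,
# then mark it executable (the equivalent of chmod +x).
import os
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Mozilla/Qwen2.5-0.5B-Instruct-llamafile",
    filename="Qwen2.5-0.5B-Instruct-Q6_K.llamafile",
)
os.chmod(path, 0o755)
print(path)  # run this path from your shell
```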

The default mode of operation for these llamafiles is our new
command-line chatbot interface.

## Usage

You can use triple quotes to ask questions on multiple lines. You can
pass commands like `/stats` and `/context` to see runtime status
information. You can change the system prompt by passing the `-p "new
system prompt"` flag. You can press CTRL-C to interrupt the model.
Finally, CTRL-D may be used to exit.

If you prefer to use a web GUI, a `--server` mode is provided that
will open a tab with a chatbot and completion interface in your
browser. For additional help on how it may be used, pass the `--help`
flag. The server also has an OpenAI API-compatible completions
endpoint that can be accessed via Python using the `openai` pip
package.

```sh
./Qwen2.5-0.5B-Instruct-Q6_K.llamafile --server
```
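
As a minimal sketch of that endpoint, the snippet below uses the
`openai` pip package against the locally running server. It assumes
the server is listening at its default address of
`http://localhost:8080` (check the address printed at startup); the
API key is a placeholder, since the local server does not check it:

```python
# Sketch: query the llamafile server's OpenAI-compatible endpoint.
# Assumes the default listen address http://localhost:8080.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-no-key-required",  # placeholder; the local server ignores it
)

completion = client.chat.completions.create(
    model="Qwen2.5-0.5B-Instruct",  # informational; the server runs its embedded weights
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(completion.choices[0].message.content)
```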

An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it
may be used, pass the `--help` flag.

```sh
./Qwen2.5-0.5B-Instruct-Q6_K.llamafile --cli -p 'four score and seven' --log-disable
```
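
The `--cli` mode composes with other languages too. As a hedged
example, here is one way a Python script might capture a completion,
using only the flags demonstrated above and assuming the completion is
written to stdout:

```python
# Sketch: drive the llamafile --cli mode from Python.
# Assumes the generated text arrives on stdout.
import subprocess

result = subprocess.run(
    ["./Qwen2.5-0.5B-Instruct-Q6_K.llamafile",
     "--cli", "-p", "four score and seven", "--log-disable"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```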

## Troubleshooting

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.

On Linux, the way to avoid run-detector errors is to install the APE
interpreter.

```sh
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```

On Windows there's a 4GB limit on executable sizes.

## Context Window

This model has a max context window size of 32k tokens (32,768; see
the model specifications below). By default, a context window size of
8192 tokens is used. You can ask llamafile to use the maximum context
size by passing the `-c 0` flag. That's big enough for a small book.
If you want to be able to have a conversation with your book, you can
use the `-f book.txt` flag.

## GPU Acceleration

On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed if you own an NVIDIA GPU. On Windows, if
you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass
the flags `--recompile --gpu amd` the first time you run your llamafile.

On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open source software, but it
doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.

For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).

## About llamafile

llamafile is a new format introduced by Mozilla on Nov 20th 2023. It
uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.

---

# Qwen2.5-0.5B-Instruct

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV (see the note below)
- Context Length: Full 32,768 tokens and generation 8,192 tokens
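
One practical note on the GQA numbers above: with 14 query heads sharing 2 KV heads, each KV head serves 7 query heads, so the KV cache is roughly 1/7 the size it would be under standard multi-head attention. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope arithmetic for the GQA spec above (14 Q heads, 2 KV heads).
num_q_heads = 14
num_kv_heads = 2

group_size = num_q_heads // num_kv_heads
print(f"{group_size} query heads share each KV head")                        # 7
print(f"KV cache is {num_kv_heads / num_q_heads:.1%} of the MHA equivalent")  # 14.3%
```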

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
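
A quick way to confirm your environment meets this requirement is to check the installed version directly. A sketch (`packaging` ships as a dependency of `transformers`):

```python
# Sketch: verify transformers is new enough for the "qwen2" model type,
# which requires version 4.37.0 or later per the note above.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "run: pip install -U transformers"
)
```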

## Quickstart

The code snippet below shows how to load the tokenizer and model and how to generate content using `apply_chat_template`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```