Bojun-Feng committed on
Commit c757449 · verified · 1 Parent(s): c9ea779

Upload README.md with huggingface_hub

Files changed (1): README.md (+205 −0)

README.md ADDED:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64a523ba1ed90082dafde3d3/kJrkxofwOp-89uYFe0EBb.png" alt="LlamaFile" style="width: 50%; min-width: 400px; display: block; margin: auto;">
</div>

I am not the original creator of llamafile; all credit for llamafile goes to Jartine:
<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Qwen2.5 Coder 32B Instruct GGUF - llamafile

## Run LLMs locally with a single file - No installation required!

All you need to do is download a file and run it.

Our goal is to make open source large language models much more
accessible to both developers and end users. We're doing that by
combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one
framework that collapses all the complexity of LLMs down to
a single-file executable (called a "llamafile") that runs
locally on most computers, with no installation.

## How to Use (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#quickstart))

The easiest way to try it for yourself is to download our example llamafile.
With llamafile, all inference happens locally; no data ever leaves your computer.

1. Download the llamafile.

2. Open your computer's terminal.

3. If you're using macOS, Linux, or BSD, you'll need to grant permission
for your computer to execute this new file. (You only need to do this
once.)

```sh
chmod +x qwen2.5-coder-32b-instruct-q8_0.gguf
```

4. If you're on Windows, rename the file by adding ".exe" on the end.

5. Run the llamafile, e.g.:

```sh
./qwen2.5-coder-32b-instruct-q8_0.gguf
```

6. Your browser should open automatically and display a chat interface.
(If it doesn't, just open your browser and point it at http://localhost:8080.)
You can also query the running model from the command line; see the sketch after these steps.

7. When you're done chatting, return to your terminal and hit
`Control-C` to shut down llamafile.

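The chat UI is backed by a local web server, so you can also talk to the model programmatically. Below is a minimal sketch, assuming your llamafile embeds the llama.cpp server and exposes its OpenAI-compatible `/v1/chat/completions` endpoint on the default port 8080 (the `model` value is a free-form label here, not a required name):

```sh
# Minimal sketch: query the locally running llamafile over HTTP.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder-32b-instruct",
    "messages": [
      {"role": "user", "content": "Write a hello world program in C."}
    ]
  }'
```
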
Note: Hugging Face has a 50GB file upload limit, so you may need to use the `cat` command to concatenate large llamafiles before running them.

Here is an example doing so for `Mozilla/Meta-Llama-3.1-405B-Instruct-llamafile`:
```
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat0.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat1.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat2.llamafile
wget https://huggingface.co/Mozilla/Meta-Llama-3.1-405B-llamafile/resolve/main/Meta-Llama-3.1-405B.Q2_K.cat3.llamafile
cat Meta-Llama-3.1-405B.Q2_K.cat{0,1,2,3}.llamafile >Meta-Llama-3.1-405B.Q2_K.llamafile
rm Meta-Llama-3.1-405B.Q2_K.cat*.llamafile
chmod +x Meta-Llama-3.1-405B.Q2_K.llamafile
./Meta-Llama-3.1-405B.Q2_K.llamafile
```
The `.catN` pieces are raw byte segments of a single file, so concatenating them in order restores the original llamafile.

Please note that LlamaFile is still under active development. Some methods may not be compatible with the most recent documentation.

## Settings for Qwen2.5 Coder 32B Instruct GGUF Llamafiles

- Model creator: [Qwen](https://huggingface.co/Qwen)
- Quantized GGUF files used: [Qwen/Qwen2.5-Coder-32B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-GGUF/tree/9d3053fce650fe1cdbdb75998c2a87add9d178ef)
  - Commit message: "Update README.md"
  - Commit hash: 9d3053fce650fe1cdbdb75998c2a87add9d178ef
- LlamaFile version used: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile/tree/29b5f27172306da39a9c70fe25173da1b1564f82)
  - Commit message: "Merge pull request #687 from Xydane/main Add Support for DeepSeek-R1 models"
  - Commit hash: 29b5f27172306da39a9c70fe25173da1b1564f82
- `.args` content format (example):

```
-m
qwen2.5-coder-32b-instruct-q8_0.gguf
...
```

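For context: the `.args` file embedded in a llamafile lists default command-line arguments, one per line, and the `...` entry marks where any arguments you pass at run time are inserted. As a hypothetical fuller example (the extra flags below are illustrative, not necessarily what this repo ships):

```
-m
qwen2.5-coder-32b-instruct-q8_0.gguf
--host
0.0.0.0
-ngl
9999
...
```
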
## (The following is the original model card for Qwen2.5 Coder 32B Instruct GGUF)
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">


# Qwen2.5-Coder-32B-Instruct-GGUF
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Introduction

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes, 0.5, 1.5, 3, 7, 14, and 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scaled the training data up to 5.5 trillion tokens, including source code, text-code grounding data, synthetic data, and more. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. It not only enhances coding capabilities but also maintains strengths in mathematics and general competencies.
- **Long-context support** up to 128K tokens.

**This repo contains the instruction-tuned 32B Qwen2.5-Coder model in the GGUF format**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens
  - Note: Currently, only vLLM supports YaRN for length extrapolation. If you want to process sequences up to 131,072 tokens, please refer to the non-GGUF models.
- Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0

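For a rough sense of what the full context window costs in memory: with 64 layers and 8 KV heads, and assuming a head dimension of 128 (not listed above, but standard for this architecture), an FP16 KV cache at the full 32,768 tokens takes about 2 (K and V) × 64 × 8 × 128 × 32,768 × 2 bytes ≈ 8 GiB, on top of the model weights.
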
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), and [Arxiv](https://arxiv.org/abs/2409.12186).

## Quickstart

Check out our [llama.cpp documentation](https://qwen.readthedocs.io/en/latest/run_locally/llama.cpp.html) for a more detailed usage guide.

We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.

Since cloning the entire repo may be inefficient, you can manually download the GGUF file that you need or use `huggingface-cli`:
1. Install:
```shell
pip install -U huggingface_hub
```
2. Download:
```shell
huggingface-cli download Qwen/Qwen2.5-Coder-32B-Instruct-GGUF --include "qwen2.5-coder-32b-instruct-q5_k_m*.gguf" --local-dir . --local-dir-use-symlinks False
```
For large files, we split them into multiple segments due to the limitation of file upload. They share a prefix, with a suffix indicating the segment's index. For example, `qwen2.5-coder-32b-instruct-q5_k_m-00001-of-00003.gguf`, `qwen2.5-coder-32b-instruct-q5_k_m-00002-of-00003.gguf` and `qwen2.5-coder-32b-instruct-q5_k_m-00003-of-00003.gguf`. The above command will download all of them.
3. (Optional) Merge:
For split files, you need to merge them first with the `llama-gguf-split` command, as shown below:
```bash
# ./llama-gguf-split --merge <first-split-file-path> <merged-file-path>
./llama-gguf-split --merge qwen2.5-coder-32b-instruct-q5_k_m-00001-of-00003.gguf qwen2.5-coder-32b-instruct-q5_k_m.gguf
```

To achieve a chatbot-like experience, it is recommended to start in conversation mode:

```shell
./llama-cli -m <gguf-file-path> \
    -co -cnv -p "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \
    -fa -ngl 80 -n 512
```

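If you would rather expose the model over HTTP than chat in the terminal, llama.cpp also ships a `llama-server` binary. A minimal sketch, reusing the offload settings from the command above (the merged q5_k_m file name and the port are assumptions):

```shell
# Minimal sketch: serve the GGUF via llama-server's built-in HTTP API.
./llama-server -m qwen2.5-coder-32b-instruct-q5_k_m.gguf \
    -fa -ngl 80 --port 8080
```
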

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).

For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```
@article{hui2024qwen2,
  title={Qwen2.5-Coder Technical Report},
  author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
  journal={arXiv preprint arXiv:2409.12186},
  year={2024}
}
@article{qwen2,
  title={Qwen2 Technical Report},
  author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
  journal={arXiv preprint arXiv:2407.10671},
  year={2024}
}
```