danielhanchen committed on
Commit 80ed193 · verified · 1 Parent(s): 8e44d76

Add files using upload-large-folder tool

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-Q2_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen3-32B-UD-Q4_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
Qwen3-32B-UD-IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ea869b0ac14f7ea9e9f12ddbace3caa61fcf55d40fca3038a7bf2479f427267
+ size 8405400448
Qwen3-32B-UD-IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b98eac03a42a1b3823824296523f3200324bf9c3c38cb5f416919702fc2f65a6
+ size 7842200448
Qwen3-32B-UD-IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9454b161c5928415160984ef9f0b97ad9a1d698065c8a33da21e7641ec727b46
+ size 11643370368
Qwen3-32B-UD-IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe3cc52867fde01c9b203f47e457f1b69090b9e2c871c986070860bf435c95f4
+ size 13070867328
Qwen3-32B-UD-Q2_K_XL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04c55e7641adcbfd395516fdccab6a821c01caaf29100a69c41cf1a9836651a3
+ size 12797351808
Qwen3-32B-UD-Q4_K_XL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4583c57987735066b9af1666e33c5354c6ab1e87f82bdc9f4a4fc21461e69c0b
+ size 20021712768
README.md ADDED
@@ -0,0 +1,343 @@
+ ---
+ tags:
+ - unsloth
+ base_model:
+ - Qwen/Qwen3-32B
+ ---
+ # Qwen3-32B
+
+ ## Qwen3 Highlights
+
+ Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:
+
+ - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
+ - **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5-Instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
+ - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
+ - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
+ - **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
+
+ ## Model Overview
+
+ **Qwen3-32B** has the following features:
+ - Type: Causal Language Model
+ - Training Stage: Pretraining & Post-training
+ - Number of Parameters: 32.8B
+ - Number of Parameters (Non-Embedding): 31.2B
+ - Number of Layers: 64
+ - Number of Attention Heads (GQA): 64 for Q and 8 for KV
+ - Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts)
+
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Quickstart
+
+ The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
+
+ With `transformers<4.51.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen3'
+ ```
+
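+ If you would rather fail fast than hit that `KeyError` at load time, you can assert the version up front; a minimal sketch (`packaging` ships as a dependency of `transformers`):
+
+ ```python
+ import transformers
+ from packaging import version
+
+ # Qwen3 support landed in transformers 4.51.0; older versions raise KeyError: 'qwen3'.
+ assert version.parse(transformers.__version__) >= version.parse("4.51.0"), (
+     f"transformers {transformers.__version__} is too old for Qwen3; "
+     "upgrade with `pip install -U transformers`."
+ )
+ ```
+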
+ The following code snippet illustrates how to use the model to generate content based on given inputs.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen3-32B"
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # prepare the model input
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # conduct text completion
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+
+ # parse thinking content
+ try:
+     # rindex finding 151668 (</think>)
+     index = len(output_ids) - output_ids[::-1].index(151668)
+ except ValueError:
+     index = 0
+
+ thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
+ content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
+
+ print("thinking content:", thinking_content)
+ print("content:", content)
+ ```
+
+ For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
+ - vLLM:
+ ```shell
+ vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
+ ```
+ - SGLang:
+ ```shell
+ python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser deepseek-r1
+ ```
+
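+ Either server speaks the OpenAI chat-completions protocol, so a standard client works against it. A minimal sketch with the `openai` Python package, assuming the vLLM command above is serving on `localhost:8000`; the `reasoning_content` field is what the reasoning parser adds in that setup, so treat it as deployment-specific:
+
+ ```python
+ from openai import OpenAI
+
+ # Local OpenAI-compatible endpoint started by `vllm serve` above.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="Qwen/Qwen3-32B",
+     messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
+     temperature=0.6,  # thinking-mode settings; see Best Practices below
+     top_p=0.95,
+ )
+ message = response.choices[0].message
+ print(getattr(message, "reasoning_content", None))  # separated reasoning, if the parser is enabled
+ print(message.content)                              # final answer
+ ```
+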
+ ## Switching Between Thinking and Non-Thinking Mode
+
+ > [!TIP]
+ > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
+ > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
+
+ ### `enable_thinking=True`
+
+ By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
+
+ ```python
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=True  # True is the default value for enable_thinking
+ )
+ ```
+
+ In this mode, the model will generate thinking content wrapped in a `<think>...</think>` block, followed by the final response.
+
+ > [!NOTE]
+ > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
+
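+ In `transformers`, those recommendations map directly onto standard generation kwargs; a minimal sketch reusing `model` and `model_inputs` from the Quickstart:
+
+ ```python
+ # Thinking-mode sampling per the note above; do_sample=True avoids greedy decoding.
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.95,
+     top_k=20,
+     min_p=0.0,
+ )
+ ```
+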
+ ### `enable_thinking=False`
+
+ We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for efficiency.
+
+ ```python
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=False  # Setting enable_thinking=False disables thinking mode
+ )
+ ```
+
+ In this mode, the model will not generate any thinking content and will not include a `<think>...</think>` block.
+
+ > [!NOTE]
+ > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
+
+ ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
+
+ We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
+
+ Here is an example of a multi-turn conversation:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ class QwenChatbot:
+     def __init__(self, model_name="Qwen/Qwen3-32B"):
+         self.tokenizer = AutoTokenizer.from_pretrained(model_name)
+         self.model = AutoModelForCausalLM.from_pretrained(model_name)
+         self.history = []
+
+     def generate_response(self, user_input):
+         messages = self.history + [{"role": "user", "content": user_input}]
+
+         text = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=False,
+             add_generation_prompt=True
+         )
+
+         inputs = self.tokenizer(text, return_tensors="pt")
+         response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
+         response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
+
+         # Update history
+         self.history.append({"role": "user", "content": user_input})
+         self.history.append({"role": "assistant", "content": response})
+
+         return response
+
+ # Example Usage
+ if __name__ == "__main__":
+     chatbot = QwenChatbot()
+
+     # First input (without /think or /no_think tags, thinking mode is enabled by default)
+     user_input_1 = "How many r's in strawberries?"
+     print(f"User: {user_input_1}")
+     response_1 = chatbot.generate_response(user_input_1)
+     print(f"Bot: {response_1}")
+     print("----------------------")
+
+     # Second input with /no_think
+     user_input_2 = "Then, how many r's in blueberries? /no_think"
+     print(f"User: {user_input_2}")
+     response_2 = chatbot.generate_response(user_input_2)
+     print(f"Bot: {response_2}")
+     print("----------------------")
+
+     # Third input with /think
+     user_input_3 = "Really? /think"
+     print(f"User: {user_input_3}")
+     response_3 = chatbot.generate_response(user_input_3)
+     print(f"Bot: {response_3}")
+ ```
+
+ > [!NOTE]
+ > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
+ > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate thinking content and will not include a `<think>...</think>` block.
+
+ ## Agentic Use
+
+ Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
+
+ To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
+ ```python
+ from qwen_agent.agents import Assistant
+
+ # Define LLM
+ llm_cfg = {
+     'model': 'Qwen3-32B',
+
+     # Use the endpoint provided by Alibaba Model Studio:
+     # 'model_type': 'qwen_dashscope',
+     # 'api_key': os.getenv('DASHSCOPE_API_KEY'),
+
+     # Use a custom endpoint compatible with OpenAI API:
+     'model_server': 'http://localhost:8000/v1',  # api_base
+     'api_key': 'EMPTY',
+
+     # Other parameters:
+     # 'generate_cfg': {
+     #     # Add: when the response content is `<think>this is the thought</think>this is the answer`;
+     #     # Do not add: when the response has been separated into reasoning_content and content.
+     #     'thought_in_content': True,
+     # },
+ }
+
+ # Define Tools
+ tools = [
+     {'mcpServers': {  # You can specify the MCP configuration file
+         'time': {
+             'command': 'uvx',
+             'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
+         },
+         'fetch': {
+             'command': 'uvx',
+             'args': ['mcp-server-fetch']
+         }
+     }},
+     'code_interpreter',  # Built-in tools
+ ]
+
+ # Define Agent
+ bot = Assistant(llm=llm_cfg, function_list=tools)
+
+ # Streaming generation
+ messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
+ for responses in bot.run(messages=messages):
+     pass
+ print(responses)
+ ```
+
+ ## Processing Long Texts
+
+ Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
+
+ YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, and `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
+
+ - Modifying the model files:
+
+   In the `config.json` file, add the `rope_scaling` fields (a programmatic sketch of this edit follows the list):
+   ```json
+   {
+     ...,
+     "rope_scaling": {
+       "type": "yarn",
+       "factor": 4.0,
+       "original_max_position_embeddings": 32768
+     }
+   }
+   ```
+   For `llama.cpp`, you need to regenerate the GGUF file after the modification.
+
+ - Passing command line arguments:
+
+   For `vllm`, you can use
+   ```shell
+   vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+   ```
+
+   For `sglang`, you can use
+   ```shell
+   python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+   ```
+
+   For `llama-server` from `llama.cpp`, you can use
+   ```shell
+   llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
+   ```
+
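+ For the `transformers` route, the `config.json` edit from the first approach can also be applied programmatically; a minimal sketch with the standard `json` module (the local snapshot path is illustrative):
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Path to a local snapshot of the model; adjust to wherever you downloaded it.
+ config_path = Path("Qwen3-32B/config.json")
+ config = json.loads(config_path.read_text())
+
+ # Enable static YaRN for 4x the native 32,768-token context.
+ config["rope_scaling"] = {
+     "type": "yarn",
+     "factor": 4.0,
+     "original_max_position_embeddings": 32768,
+ }
+ config_path.write_text(json.dumps(config, indent=2))
+ ```
+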
+ > [!IMPORTANT]
+ > If you encounter the following warning
+ > ```
+ > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
+ > ```
+ > please upgrade to `transformers>=4.51.0`.
+
+ > [!NOTE]
+ > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
+ > We advise adding the `rope_scaling` configuration only when processing long contexts is required.
+ > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.
+
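+ The rule of thumb behind that advice is simply `factor = typical_context / native_context`; a minimal sketch:
+
+ ```python
+ native_context = 32768   # Qwen3's native window (original_max_position_embeddings)
+ typical_context = 65536  # typical total length for your application
+
+ # Static YaRN: scale only as far as your workload needs; extra headroom
+ # costs accuracy on shorter inputs.
+ factor = typical_context / native_context
+ print(factor)  # 2.0
+ ```
+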
+ > [!NOTE]
+ > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation reserves 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
+
+ > [!TIP]
+ > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default, and no extra configuration is needed.
+
+ ## Best Practices
+
+ To achieve optimal performance, we recommend the following settings:
+
+ 1. **Sampling Parameters**:
+    - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
+    - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
+    - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
+
+ 2. **Adequate Output Length**: We recommend an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
+
+ 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
+    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+    - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
+
+ 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed; a sketch of such stripping follows this list.
+
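+ A minimal sketch of point 4 for frameworks that bypass the Jinja2 template, reusing `history` and `response` from the soft-switch example above (the helper name is illustrative):
+
+ ```python
+ import re
+
+ THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)
+
+ def strip_thinking(text: str) -> str:
+     """Keep only the final answer; thinking content must not re-enter the history."""
+     return THINK_BLOCK.sub("", text).lstrip("\n")
+
+ history.append({"role": "assistant", "content": strip_thinking(response)})
+ ```
+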
+ ### Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{qwen3,
+     title  = {Qwen3},
+     url    = {https://qwenlm.github.io/blog/qwen3/},
+     author = {Qwen Team},
+     month  = {April},
+     year   = {2025}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "Qwen3ForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "eos_token_id": 151645,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 5120,
+   "initializer_range": 0.02,
+   "intermediate_size": 25600,
+   "max_position_embeddings": 40960,
+   "max_window_layers": 64,
+   "model_type": "qwen3",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 64,
+   "num_key_value_heads": 8,
+   "pad_token_id": 151654,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.52.0.dev0",
+   "unsloth_fixed": true,
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151936
+ }