Qwen/Qwen3-32B

yangapku committed · commit 30b8421 · verified · 1 parent: ba1f828

Update README.md

Files changed (1):
  1. README.md +5 -5

README.md CHANGED
````diff
@@ -90,7 +90,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` or to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
 python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
@@ -100,7 +100,7 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` or to create
 vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
-For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM have also supported Qwen3.
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
````
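Either launch command above exposes an OpenAI-compatible HTTP API, so the endpoint can be exercised with the standard `openai` Python client. A minimal sketch, assuming vLLM's default port 8000 (SGLang defaults to 30000) and that the server is already running:

```python
from openai import OpenAI

# Local servers ignore the API key, but the client requires one, so any
# placeholder string works. Port 8000 is vLLM's default; use 30000 for SGLang.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
)
print(response.choices[0].message.content)
```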
````diff
@@ -274,7 +274,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers
 {
     ...,
     "rope_scaling": {
-        "type": "yarn",
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
````
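For anyone applying the renamed key by hand rather than pulling the updated checkpoint, a minimal sketch that patches a local `config.json`; the path is a hypothetical download location, not something the README specifies:

```python
import json
from pathlib import Path

# Hypothetical local checkpoint directory; adjust to wherever Qwen3-32B lives.
config_path = Path("./Qwen3-32B/config.json")

config = json.loads(config_path.read_text())
# Enable YaRN using the renamed "rope_type" key from the diff above,
# keeping all other config entries intact.
config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config_path.write_text(json.dumps(config, indent=2))
```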
````diff
@@ -286,12 +286,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers
 
 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```
 
 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```
 
 For `llama-server` from `llama.cpp`, you can use
````
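As a sanity check on these settings: YaRN scales the original context window by `factor`, so 4.0 × 32768 = 131072 tokens, which is exactly the value the `vllm` command passes to `--max-model-len`.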