Update README.md
README.md CHANGED
@@ -92,7 +92,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` or to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint:
 - SGLang:
   ```shell
   python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3
@@ -102,7 +102,7 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` or to create
   vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1
   ```
 
-For local use, applications such as
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
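Either command above exposes an OpenAI-compatible endpoint. Below is a minimal sketch of querying it with the `openai` Python client; the port is an assumption (vLLM serves on 8000 by default, SGLang on 30000), so adjust `base_url` to match your launch command.

```python
# Minimal sketch: chat with the locally served model through the
# OpenAI-compatible API started by either command above.
# Assumes vLLM's default port 8000; use http://localhost:30000/v1
# for SGLang's default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-14B",
    messages=[{"role": "user", "content": "Briefly explain YaRN context extension."}],
)
print(response.choices[0].message.content)
```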
@@ -276,7 +276,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 {
     ...,
     "rope_scaling": {
-        "
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
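The `rope_scaling` block above lives in the model's `config.json`. A minimal sketch of applying the same change to a local checkpoint programmatically, where the path is illustrative rather than part of the original instructions:

```python
# Minimal sketch: write the YaRN rope_scaling block shown in the hunk
# above into a local copy of the model's config.json.
# The checkpoint path is illustrative; point it at your download.
import json

config_path = "Qwen3-14B/config.json"
with open(config_path) as f:
    config = json.load(f)

# factor 4.0 x 32768 original positions = 131072-token context
config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```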
@@ -288,12 +288,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 
 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```
 
 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```
 
 For `llama-server` from `llama.cpp`, you can use
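Since the surrounding text names `transformers` among the frameworks supporting YaRN, here is a sketch of applying the same override at load time instead of editing `config.json`. It assumes your `transformers` version forwards config-field keyword arguments through `from_pretrained` (recent versions override loaded configuration values this way).

```python
# Minimal sketch: apply the same YaRN settings at load time with
# transformers rather than editing config.json. Keyword arguments
# matching config fields override the loaded configuration.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```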