yangapku committed
Commit b192946 · verified · 1 Parent(s): 01b0452

Update README.md

Files changed (1):
  1. README.md +6 -29

README.md CHANGED
@@ -92,7 +92,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```

-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
     ```shell
     python -m sglang.launch_server --model-path Qwen/Qwen3-14B-FP8 --reasoning-parser qwen3
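Once either server above is running (the matching vLLM command appears at the start of the next hunk), the endpoint speaks the standard OpenAI chat-completions API. A minimal client-side sketch, assuming vLLM's default port 8000 and a placeholder API key; adjust `base_url` to match your launch flags:

```python
# Sketch only: query the OpenAI-compatible endpoint started by `vllm serve`
# or `sglang.launch_server`. Port 8000 is vLLM's default; sglang uses `--port`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is not checked locally

response = client.chat.completions.create(
    model="Qwen/Qwen3-14B-FP8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```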
@@ -102,39 +102,16 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
     vllm serve Qwen/Qwen3-14B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
     ```

-For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM have also supported Qwen3.
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.

 ## Note on FP8

 For convenience and performance, we have provided an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with block size of 128. You can find more details in the `quantization_config` field in `config.json`.

-You can use the Qwen3-14B-FP8 model with several inference frameworks, including `transformers`, `vllm`, and `sglang`, as the original bfloat16 model.
+You can use the Qwen3-14B-FP8 model with several inference frameworks, including `transformers`, `sglang`, and `vllm`, as the original bfloat16 model.
 However, please pay attention to the following known issues:
 - `transformers`:
     - there are currently issues with the "fine-grained fp8" method in `transformers` for distributed inference. You may need to set the environment variable `CUDA_LAUNCH_BLOCKING=1` if multiple devices are used in inference.
-- vLLM:
-    - there are currently compatibility issues with `vllm`. For a quick fix, you should make the following changes to `vllm/vllm/model_executor/layers/linear.py`:
-    ```python
-    # these changes are in QKVParallelLinear.weight_loader_v2() of vllm/vllm/model_executor/layers/linear.py
-    ...
-    shard_offset = self._get_shard_offset_mapping(loaded_shard_id)
-    shard_size = self._get_shard_size_mapping(loaded_shard_id)
-
-    # add the following code
-    if isinstance(param, BlockQuantScaleParameter):
-        weight_block_size = self.quant_method.quant_config.weight_block_size
-        block_n, _ = weight_block_size[0], weight_block_size[1]
-        shard_offset = (shard_offset + block_n - 1) // block_n
-        shard_size = (shard_size + block_n - 1) // block_n
-    # end of the modification
-
-    param.load_qkv_weight(loaded_weight=loaded_weight,
-                          num_heads=self.num_kv_head_replicas,
-                          shard_id=loaded_shard_id,
-                          shard_offset=shard_offset,
-                          shard_size=shard_size)
-    ...
-    ```

 ## Switching Between Thinking and Non-Thinking Mode

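As a companion to the FP8 note above: the block-quantization details live in the checkpoint's `config.json`, as the hunk states. A minimal sketch for inspecting that field; the exact sub-keys (e.g., `weight_block_size`) are not spelled out in the README and should be checked against the published file:

```python
# Sketch: read the quantization metadata of the FP8 checkpoint.
# Requires `huggingface_hub`; the layout inside `quantization_config` is an assumption.
import json

from huggingface_hub import hf_hub_download

config_path = hf_hub_download(repo_id="Qwen/Qwen3-14B-FP8", filename="config.json")
with open(config_path) as f:
    config = json.load(f)

# Expected to describe fine-grained fp8 quantization with a block size of 128.
print(config.get("quantization_config"))
```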
 
@@ -308,7 +285,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 {
     ...,
     "rope_scaling": {
-        "type": "yarn",
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
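The rename above only touches the key inside `rope_scaling`; for a local copy of the checkpoint, the same edit can be applied programmatically. A minimal sketch, with the path purely illustrative:

```python
# Sketch: write the YaRN rope_scaling block shown above into a local config.json.
# The directory name is a placeholder for wherever the checkpoint is stored.
import json
from pathlib import Path

config_path = Path("Qwen3-14B-FP8/config.json")  # hypothetical local path
config = json.loads(config_path.read_text())

config["rope_scaling"] = {
    "rope_type": "yarn",  # key name after this commit ("type" before it)
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

config_path.write_text(json.dumps(config, indent=2) + "\n")
```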
@@ -320,12 +297,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`

 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```

 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```

 For `llama-server` from `llama.cpp`, you can use
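For the `transformers` route named in the hunk headers above, the same YaRN override can also be supplied at load time rather than by editing `config.json`, relying on the standard behavior that config attributes passed to `from_pretrained` override stored values. A sketch, mirroring the values shown above:

```python
# Sketch: check that a runtime YaRN override is picked up without touching config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "Qwen/Qwen3-14B-FP8",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
print(config.rope_scaling)
```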
 