Qwen/Qwen3-14B-GGUF
feihu.hf committed
Commit 530227a · 1 Parent(s): c75e7b2

update README

Files changed (2):
  1. README.md +2 -2
  2. params +2 -1
README.md CHANGED
@@ -30,7 +30,7 @@ Qwen3 is the latest generation of large language models in Qwen series, offering
 - Number of Parameters (Non-Embedding): 13.2B
 - Number of Layers: 40
 - Number of Attention Heads (GQA): 40 for Q and 8 for KV
-- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
+- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
 
 - Quantization: q4_K_M, q5_0, q5_K_M, q6_K, q8_0
 
@@ -46,7 +46,7 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
 In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
 
 ```shell
-./llama-cli -hf Qwen/Qwen3-14B:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 40960 -n 32768 --no-context-shift
+./llama-cli -hf Qwen/Qwen3-14B-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 40960 -n 32768 --no-context-shift
 ```
 
 ### ollama
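The command-line change points `-hf` at the GGUF repository (`Qwen/Qwen3-14B-GGUF`) rather than the base-model repository, since `llama-cli -hf` downloads by Hugging Face repo name; note also that `-c 40960` budgets the 32,768-token generation limit (`-n 32768`) plus 8,192 tokens of prompt headroom. As a minimal sketch (not part of this commit), the same defaults should carry over to `llama-server` from the same build, assuming it accepts the same sampling flags as `llama-cli`:

```shell
# Sketch only: serve the Q8_0 quant over HTTP with the README's sampler
# defaults; exact flag support may vary across llama.cpp versions.
./llama-server -hf Qwen/Qwen3-14B-GGUF:Q8_0 --jinja -ngl 99 -fa \
  --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 \
  -c 40960 --port 8080
```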
params CHANGED
@@ -9,5 +9,6 @@
 "presence_penalty" : 1.5,
 "top_k" : 20,
 "top_p" : 0.95,
-"num_predict" : 32768
+"num_predict" : 32768,
+"num_ctx": 40960
 }
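The added `num_ctx` aligns ollama's context window with the `-c 40960` used for llama.cpp above, and `num_predict` mirrors `-n 32768`. As a hedged usage sketch (the `hf.co/...` pull syntax is ollama's Hugging Face integration, not something stated in this commit), the repo's params file should then apply as model defaults:

```shell
# Sketch: pull the GGUF directly from Hugging Face; ollama reads the repo's
# params file, so the num_ctx/num_predict above become the model's defaults.
ollama run hf.co/Qwen/Qwen3-14B-GGUF:Q8_0
```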