Brianpu committed · Commit 3d29553 · verified · Parent: 9a5afef

Upload README.md with huggingface_hub

Files changed (1): README.md (added, +49 −0)
---
base_model: Qwen/Qwen2-0.5B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---

*Produced by [Antigma Labs](https://antigma.ai)*
## llama.cpp quantization
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5165">b5165</a>.
Original model: https://huggingface.co/Qwen/Qwen2-0.5B-Instruct
Run the files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen2-0.5b-instruct-q4_k_m.gguf](https://huggingface.co/Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF/blob/main/qwen2-0.5b-instruct-q4_k_m.gguf) | Q4_K_M | 0.37 GB | False |

## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want (note that `huggingface-cli download` takes a repo ID, not a URL):

```
huggingface-cli download Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --include "qwen2-0.5b-instruct-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --include "qwen2-0.5b-instruct-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. `Brianpu_Qwen2-0.5B-Instruct-Q4_K_M-GGUF`) or download everything in place (`./`).

</details>
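After downloading, a quick stdlib-only sanity check is to confirm the file really is a GGUF container: every GGUF file starts with the 4-byte ASCII magic `GGUF`. A minimal sketch (the helper name is my own):

```python
def is_gguf(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic ("GGUF")."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

This catches truncated or HTML-error-page downloads before you hand the file to llama.cpp.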