---
base_model: Qwen/Qwen2-0.5B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
*Produced by [Antigma Labs](https://antigma.ai)*
## llama.cpp quantization
Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5165">b5165</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen2-0.5B-Instruct
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
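For example, recent llama.cpp builds can pull a GGUF file straight from the Hugging Face Hub. A minimal sketch, assuming a build that includes the `llama-cli`/`llama-server` binaries and the `--hf-repo`/`--hf-file` flags:
```
# Chat with the model; the file is downloaded and cached on first use
llama-cli --hf-repo Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF \
  --hf-file qwen2-0.5b-instruct-q4_k_m.gguf \
  -p "Why is the sky blue?"

# Or serve an OpenAI-compatible HTTP API (default port 8080)
llama-server --hf-repo Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF \
  --hf-file qwen2-0.5b-instruct-q4_k_m.gguf \
  -c 2048
```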
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
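For example, with the system prompt `You are a helpful assistant.` and the user message `Hello!`, the rendered prompt sent to the model would be:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```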
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen2-0.5b-instruct-q4_k_m.gguf](https://huggingface.co/Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF/blob/main/qwen2-0.5b-instruct-q4_k_m.gguf)|Q4_K_M|0.37 GB|False|
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --include "qwen2-0.5b-instruct-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download Brianpu/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --include "qwen2-0.5b-instruct-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. Brianpu_Qwen2-0.5B-Instruct-Q4_K_M-GGUF) or download them all in place (./).
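Note that you usually don't need to merge split files yourself: llama.cpp can load a split GGUF when pointed at the first shard and picks up the remaining parts from the same folder. A sketch, with hypothetical shard names:
```
# Load the first shard; llama.cpp discovers 00002/00003 automatically
llama-cli -m ./qwen2-0.5b-instruct-q4_k_m-00001-of-00003.gguf -p "Hello"
```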
</details>