---
base_model: Qwen/Qwen1.5-0.5B
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- pretrained
- llama-cpp
- gguf-my-repo
---
Produced by Antigma Labs

## llama.cpp quantization

Quantized using llama.cpp release b4944.

Original model: https://huggingface.co/Qwen/Qwen1.5-0.5B

Run the GGUF files directly with llama.cpp, or with any other llama.cpp-based project.
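For example, an invocation might look like the following. This is a sketch, assuming the `llama-cli` binary built from llama.cpp is on your PATH and the GGUF file sits in the current directory; the prompt and sampling settings are placeholders:

```shell
# Run a plain completion on the quantized model (base model, no chat template)
llama-cli -m qwen1.5-0.5b-q4_k_m.gguf \
  -p "The capital of France is" \
  -n 64 --temp 0.7
```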
## Prompt format

Qwen1.5-0.5B is a base (pretrained) model, so you can prompt it with plain text for completion. The chat template used by Qwen1.5 chat variants is ChatML:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
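When bypassing llama.cpp's built-in chat handling, the template has to be assembled client-side. A minimal sketch, assuming the ChatML format used by Qwen1.5 chat models (the helper name is hypothetical):

```python
def build_chatml_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a single-turn ChatML prompt string."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Hello!"))
```

The trailing `<|im_start|>assistant\n` leaves the model positioned to generate the assistant turn.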
## Download a file (not the whole branch) from below

| Filename | Quant type | File Size | Split |
|---|---|---|---|
| qwen1.5-0.5b-q4_k_m.gguf | Q4_K_M | 0.38 GB | False |
## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download Brianpu/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Brianpu/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf/*" --local-dir ./
```

You can either specify a new local-dir (e.g. Qwen1.5-0.5B-GGUF) or download everything in place (./)
</details>