---
language:
- sr
license: mit
tags:
- text-generation-inference
- transformers
- mistral
- gguf
base_model: datatab/Yugo55A-GPT
datasets:
- datatab/ultrafeedback_binarized
- datatab/open-orca-slim-serbian
- datatab/alpaca-cleaned-serbian-full
---

# Yugo55A-GPT.GGUF

- **Developed by:** datatab
- **License:** MIT
- **Quantized from model:** [datatab/Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)

### Full Weights Model

> [datatab/Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)
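
As a quick reference, here is a minimal sketch of running one of these GGUF files locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The `.gguf` filename below is an assumption; substitute whichever quant you actually download from this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed:
#   pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Yugo55A-GPT.Q4_K_M.gguf",  # assumed filename; use the quant you downloaded
    n_ctx=2048,                            # context window size
)

# create_chat_completion applies the chat template embedded in the GGUF, if present
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Ko je bio Nikola Tesla?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```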

## 🏆 Results

> Results were obtained with the Serbian LLM evaluation suite released by Aleksa Gordić: [serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval)
>
> \* Evaluation was conducted on a 4-bit version of the model due to hardware resource constraints.

| MODEL | ARC-E | ARC-C | Hellaswag | BoolQ | Winogrande | OpenbookQA | PiQA |
|-------|-------|-------|-----------|-------|------------|------------|------|
| [\*Yugo55-GPT-v4-4bit](https://huggingface.co/datatab/Yugo55-GPT-v4-4bit/) | 51.41 | 36.00 | 57.51 | 80.92 | **65.75** | 34.70 | **70.54** |
| [Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT/) | **51.52** | **37.78** | **57.52** | **84.40** | 65.43 | **35.60** | 69.43 |

## Quantization preference

| Quant. | Description |
|----------------|-----------------------------------------------------------------------------------------------------|
| not_quantized | Recommended. Fast conversion. Slow inference, big files. |
| fast_quantized | Recommended. Fast conversion. OK inference, OK file size. |
| quantized | Recommended. Slow conversion. Fast inference, small files. |
| f32 | Not recommended. Retains 100% accuracy, but super slow and memory hungry. |
| f16 | Fastest conversion + retains 100% accuracy. Slow and memory hungry. |
| q8_0 | Fast conversion. High resource use, but generally acceptable. |
| q4_k_m | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K. |
| q5_k_m | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K. |
| q2_k | Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors. |
| q3_k_l | Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. |
| q3_k_m | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. |
| q3_k_s | Uses Q3_K for all tensors. |
| q4_0 | Original quant method, 4-bit. |
| q4_1 | Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
| q4_k_s | Uses Q4_K for all tensors. |
| q4_k | Alias for q4_k_m. |
| q5_k | Alias for q5_k_m. |
| q5_0 | Higher accuracy, higher resource usage, and slower inference. |
| q5_1 | Even higher accuracy and resource usage, and slower inference. |
| q5_k_s | Uses Q5_K for all tensors. |
| q6_k | Uses Q8_K for all tensors. |
| iq2_xxs | 2.06 bpw quantization. |
| iq2_xs | 2.31 bpw quantization. |
| iq3_xxs | 3.06 bpw quantization. |
| q3_k_xs | 3-bit extra small quantization. |
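
As a sketch of how a row in this table maps to a concrete download: pick a quant (here `q4_k_m`, one of the recommended options) and fetch only that file via `huggingface_hub`. Both the `repo_id` and `filename` below are assumptions; match them against the files actually listed in this repository.

```python
# Minimal sketch, assuming huggingface_hub is installed:
#   pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="datatab/Yugo55A-GPT.GGUF",  # assumed repo id for this model card
    filename="Yugo55A-GPT.Q4_K_M.gguf",  # assumed filename for the q4_k_m quant
)
print(path)  # local cached path, usable as model_path in llama.cpp bindings
```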