---
language:
- sr
license: mit
tags:
- text-generation-inference
- transformers
- mistral
- gguf
base_model: datatab/Yugo55A-GPT
datasets:
- datatab/ultrafeedback_binarized
- datatab/open-orca-slim-serbian
- datatab/alpaca-cleaned-serbian-full
---
# Yugo55A-GPT.GGUF
- **Developed by:** datatab
- **License:** MIT
- **Quantized from model:** [datatab/Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)
### Full Weights Model
> [datatab/Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT).
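## Usage

A minimal sketch of loading one of these GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The repo id and the `filename` glob below are assumptions; check this repository's file list for the exact quant names (requires `llama-cpp-python` and `huggingface_hub`).

```python
from llama_cpp import Llama

# Download and load one quant directly from the Hub.
# Repo id and filename pattern are assumptions -- verify on the model page.
llm = Llama.from_pretrained(
    repo_id="datatab/Yugo55A-GPT.GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",             # glob for the 4-bit medium quant
    n_ctx=4096,                          # context window
    n_gpu_layers=-1,                     # offload all layers if a GPU is present
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Ko je bio Nikola Tesla?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```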
## 🏆 Results
> Results obtained through the Serbian LLM evaluation, released by Aleksa Gordić: [serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval)
> * Evaluation was conducted on a 4-bit version of the model due to hardware resource constraints.
<table>
<tr>
<th>MODEL</th>
<th>ARC-E</th>
<th>ARC-C</th>
<th>Hellaswag</th>
<th>BoolQ</th>
<th>Winogrande</th>
<th>OpenbookQA</th>
<th>PiQA</th>
</tr>
<tr>
<td><a href="https://huggingface.co/datatab/Yugo55-GPT-v4-4bit/">*Yugo55-GPT-v4-4bit</a></td>
<td>51.41</td>
<td>36.00</td>
<td>57.51</td>
<td>80.92</td>
<td><strong>65.75</strong></td>
<td>34.70</td>
<td><strong>70.54</strong></td>
</tr>
<tr>
<td><a href="https://huggingface.co/datatab/Yugo55A-GPT/">Yugo55A-GPT</a></td>
<td><strong>51.52</strong></td>
<td><strong>37.78</strong></td>
<td><strong>57.52</strong></td>
<td><strong>84.40</strong></td>
<td>65.43</td>
<td><strong>35.60</strong></td>
<td>69.43</td>
</tr>
</table>
## Quantization preference
| Quant. | Description |
|---------------|---------------------------------------------------------------------------------------|
| not_quantized | Recommended. Fast conversion. Slow inference, big files. |
| fast_quantized| Recommended. Fast conversion. OK inference, OK file size. |
| quantized | Recommended. Slow conversion. Fast inference, small files. |
| f32 | Not recommended. Retains 100% accuracy, but super slow and memory hungry. |
| f16 | Fastest conversion + retains 100% accuracy. Slow and memory hungry. |
| q8_0 | Fast conversion. High resource use, but generally acceptable. |
| q4_k_m | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K |
| q5_k_m | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K |
| q2_k          | Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.|
| q3_k_l | Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K |
| q3_k_m | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K |
| q3_k_s | Uses Q3_K for all tensors |
| q4_0 | Original quant method, 4-bit. |
| q4_1          | Higher accuracy than q4_0 but not as high as q5_0; quicker inference than the q5 quants.|
| q4_k_s | Uses Q4_K for all tensors |
| q4_k | alias for q4_k_m |
| q5_k | alias for q5_k_m |
| q5_0          | Higher accuracy, at the cost of higher resource usage and slower inference. |
| q5_1          | Even higher accuracy and resource usage, with even slower inference. |
| q5_k_s | Uses Q5_K for all tensors |
| q6_k          | Uses Q6_K for all tensors (6-bit quantization) |
| iq2_xxs | 2.06 bpw quantization |
| iq2_xs | 2.31 bpw quantization |
| iq3_xxs | 3.06 bpw quantization |
| q3_k_xs | 3-bit extra small quantization |
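To grab just one quant from this list instead of cloning the whole repository, `huggingface_hub` can download a single file. A sketch, assuming the repo id below; take the actual `.gguf` filenames from the listing it prints.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "datatab/Yugo55A-GPT.GGUF"  # assumed repo id -- verify on the model page

# See which quants are actually published in this repo.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print("\n".join(gguf_files))

# Download one of them (here: the first file that looks like q4_k_m, if any).
target = next(f for f in gguf_files if "q4_k_m" in f.lower())
local_path = hf_hub_download(repo_id=repo_id, filename=target)
print("Saved to:", local_path)
```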