Experimental layer-wise quantization of meta-llama/Llama-Guard-3-8B

Using llama.cpp release b5170 for quantization.

Original model: meta-llama/Llama-Guard-3-8B

From the original model creators:

Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.

Llama Guard 3 was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities. Specifically, it provides content moderation in 8 languages, and was optimized to support safety and security for search and code interpreter tool calls.

PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!

An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, mobiles, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but my focus has been primarily on quantization and pruning.

The method used to produce these experimental versions is covered in Squeezing Tensor Bits: the quest for smaller LLMs, but at a high level it involves using custom versions of llama-imatrix and llama-quantize to identify influential tensors, and to quantize the most important layers at higher bit precision and the less important ones at lower bits. This process was partly inspired by Dumitru et al.'s Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels.

There are two pull requests (imatrix & quantize) open to merge these changes back into the core llama.cpp project. They may or may not ever be merged so, until then, the modified versions will be available on GitHub.
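For illustration, here is a minimal sketch of what a layer-wise quantization call could look like. The --imatrix, --token-embedding-type and --output-tensor-type flags are standard llama-quantize options; the per-tensor --tensor-type override, and the specific tensor names and quant types chosen below, are assumptions and not the actual recipe used for the models in this repo:

```sh
# Sketch only: tensor names and quant levels are illustrative.
./llama-quantize \
  --imatrix Llama-Guard-3-8B.imatrix \
  --token-embedding-type q4_k \
  --output-tensor-type q6_k \
  --tensor-type attn_v=q6_k \
  Llama-Guard-3-8B-F16.gguf Llama-Guard-3-8B-Q4_K_M.gguf q4_k_m
```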

For testing and comparison I'd normally use models produced by Unsloth (Daniel and Michael Han do some really advanced level stuff!) and Bartowski (see credits below), but they don't provide GGUF versions of this model, so all tests and comparisons are done against naive quantizations obtained by simply running llama-quantize with no further optimization.

All experimental versions were generated using an appropriate imatrix created from calibration datasets available at eaddario/imatrix-calibration. At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modeled, and it helps to counterbalance the negative effects of quantization and pruning.
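For reference, a typical imatrix generation run looks roughly like the sketch below (file names are placeholders; -m, -f, -o and --chunks are standard llama-imatrix options):

```sh
# Compute an importance matrix for the F16 model over a calibration set
# taken from eaddario/imatrix-calibration (placeholder file names).
./llama-imatrix \
  -m Llama-Guard-3-8B-F16.gguf \
  -f calibration_dataset.txt \
  -o Llama-Guard-3-8B.imatrix \
  --chunks 100
```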

The process to generate these models is roughly as follows (a command-line sketch of the main steps appears after the list):

  1. Convert the original model's tensors to GGUF F16*
  2. Estimate the Perplexity score for the F16 model (baseline) using the wikitext-2-raw-v1 dataset, and save the logits
  3. Generate an imatrix from selected calibration datasets
  4. Determine tensor and layer Importance Score contribution using a modified version of llama-imatrix
  5. Select an appropriate quant level for each tensor using a modified version of llama-quantize
  6. Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande scores for each quantized model
  7. Keep versions with the best scores
  8. Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but I'm happy to provide other quants on request.

*BF16 would be preferred, but Apple's GPUs don't support it yet, so any BF16 operations are executed on the CPU, making inference unacceptably slow. This is expected to change in the near term but, until then, if you are using Apple kit avoid any models tagged BF16.
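Expressed as commands, steps 1, 2 and 6 look roughly like the sketch below (paths and file names are placeholders; --kl-divergence-base saves the baseline logits on the F16 run and reuses them when scoring a quant):

```sh
# 1. Convert the original safetensors checkpoint (local directory) to GGUF F16
python convert_hf_to_gguf.py ./Llama-Guard-3-8B --outtype f16 \
  --outfile Llama-Guard-3-8B-F16.gguf

# 2. Baseline perplexity on wikitext-2-raw-v1, saving the F16 logits
./llama-perplexity -m Llama-Guard-3-8B-F16.gguf \
  -f wikitext-2-raw/wiki.test.raw --kl-divergence-base logits.bin

# 6. Perplexity and KL divergence of a quant, scored against the saved logits
./llama-perplexity -m Llama-Guard-3-8B-Q4_K_M.gguf \
  --kl-divergence-base logits.bin --kl-divergence
```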

Models

Sizes (in GB)

| Model | Naive | Repo | Shrinkage |
| :--- | ---: | ---: | ---: |
| Llama-Guard-3-8B-IQ3_M | 3.78 | 3.69 | 2.4% |
| Llama-Guard-3-8B-IQ3_S | 3.68 | 3.43 | 6.8% |
| Llama-Guard-3-8B-IQ4_NL | 4.71 | 4.39 | 6.8% |
| Llama-Guard-3-8B-Q3_K_L | 4.32 | 3.76 | 13.0% |
| Llama-Guard-3-8B-Q3_K_M | 4.02 | 3.56 | 11.4% |
| Llama-Guard-3-8B-Q3_K_S | 3.66 | 3.31 | 9.6% |
| Llama-Guard-3-8B-Q4_K_M | 4.92 | 4.41 | 10.4% |
| Llama-Guard-3-8B-Q4_K_S | 4.69 | 4.28 | 8.7% |
| Llama-Guard-3-8B-Q5_K_M | 5.73 | 5.38 | 6.1% |
| Llama-Guard-3-8B-Q5_K_S | 5.60 | 5.24 | 6.4% |
| Llama-Guard-3-8B-Q6_K | 6.60 | 6.57 | 0.5% |
| Llama-Guard-3-8B-Q8_0 | 8.54 | 7.73 | 9.5% |

Perplexity and KL Divergence scores

| Model | μPPL | 𝜌PPL | μKLD | RMS Δp |
| :--- | ---: | ---: | ---: | ---: |
| Llama-Guard-3-8B-IQ3_M | 7.423790 ±0.046691 | 97.11% | 0.134115 ±0.000651 | 11.077 ±0.059 |
| Llama-Guard-3-8B-IQ3_S | 7.746531 ±0.048960 | 96.22% | 0.179586 ±0.000744 | 12.616 ±0.060 |
| Llama-Guard-3-8B-IQ4_NL | 6.935864 ±0.042688 | 98.71% | 0.059280 ±0.000325 | 7.170 ±0.046 |
| Llama-Guard-3-8B-Q3_K_L | 7.630634 ±0.047920 | 96.28% | 0.165526 ±0.000769 | 12.135 ±0.061 |
| Llama-Guard-3-8B-Q3_K_M | 7.831542 ±0.049335 | 95.77% | 0.188482 ±0.000852 | 12.979 ±0.062 |
| Llama-Guard-3-8B-Q3_K_S | 8.269311 ±0.052149 | 94.63% | 0.239987 ±0.001029 | 14.794 ±0.066 |
| Llama-Guard-3-8B-Q4_K_M | 6.908041 ±0.042539 | 98.78% | 0.055843 ±0.000320 | 7.016 ±0.048 |
| Llama-Guard-3-8B-Q4_K_M (naive) | 6.731828 ±0.041532 | 99.34% | 0.030829 ±0.000214 | 5.255 ±0.045 |
| Llama-Guard-3-8B-Q4_K_S | 6.930856 ±0.042651 | 98.70% | 0.059620 ±0.000336 | 7.285 ±0.049 |
| Llama-Guard-3-8B-Q5_K_M | 6.648795 ±0.040800 | 99.62% | 0.017289 ±0.000115 | 3.870 ±0.034 |
| Llama-Guard-3-8B-Q5_K_S | 6.659786 ±0.040894 | 99.60% | 0.018179 ±0.000120 | 3.957 ±0.034 |
| Llama-Guard-3-8B-Q6_K | 6.581335 ±0.040401 | 99.83% | 0.007279 ±0.000061 | 2.532 ±0.028 |
| Llama-Guard-3-8B-Q8_0 | 6.569465 ±0.040265 | 99.89% | 0.004781 ±0.000042 | 2.072 ±0.025 |
| Llama-Guard-3-8B-F16 | 6.554978 ±0.040159 | 100% | N/A | N/A |

ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores

Scores generated using llama-perplexity with 750 tasks per test, and a context size of 768 tokens.
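As an illustration, a HellaSwag run with those settings could look like the sketch below (the --hellaswag and --multiple-choice modes are standard llama-perplexity options; the dataset file names and formats are placeholders):

```sh
# HellaSwag: 750 tasks, 768-token context (dataset file is a placeholder)
./llama-perplexity -m Llama-Guard-3-8B-Q4_K_M.gguf -c 768 \
  --hellaswag --hellaswag-tasks 750 -f hellaswag-validation.txt

# ARC, MMLU and Truthful QA use the generic multiple-choice mode instead
./llama-perplexity -m Llama-Guard-3-8B-Q4_K_M.gguf -c 768 \
  --multiple-choice --multiple-choice-tasks 750 -f mmlu-validation.bin
```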

For the test data used in the generation of these scores, follow the appropriate links: HellaSwag, ARC, MMLU, Truthful QA and WinoGrande

| Model | ARC | HellaSwag | MMLU | Truthful QA | WinoGrande | Avg Score |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| Llama-Guard-3-8B-IQ3_M | 66.5333 ±1.7242 | 80.40 | 36.6667 ±1.7608 | 31.4667 ±1.6968 | 73.2000 ±1.6184 | 57.65 |
| Llama-Guard-3-8B-IQ3_S | 65.4667 ±1.7374 | 79.07 | 35.2000 ±1.7451 | 29.2000 ±1.6614 | 70.8000 ±1.6614 | 55.95 |
| Llama-Guard-3-8B-IQ4_NL | 64.9333 ±1.7436 | 79.60 | 36.8000 ±1.7621 | 30.5333 ±1.6828 | 73.2000 ±1.6184 | 57.01 |
| Llama-Guard-3-8B-Q3_K_L | 64.9333 ±1.7436 | 78.93 | 37.0667 ±1.7648 | 33.8667 ±1.7292 | 72.2667 ±1.6358 | 57.41 |
| Llama-Guard-3-8B-Q3_K_M | 63.6000 ±1.7581 | 78.67 | 36.6667 ±1.7608 | 33.4667 ±1.7242 | 70.6667 ±1.6636 | 56.61 |
| Llama-Guard-3-8B-Q3_K_S | 60.2667 ±1.7880 | 77.46 | 35.4667 ±1.7481 | 34.1333 ±1.7325 | 71.7333 ±1.6453 | 55.81 |
| Llama-Guard-3-8B-Q4_K_M | 65.6000 ±1.7358 | 80.26 | 38.1333 ±1.7748 | 30.4000 ±1.6807 | 72.2667 ±1.6358 | 57.33 |
| Llama-Guard-3-8B-Q4_K_M (naive) | 66.9786 ±1.7207 | 79.20 | 40.2667 ±1.7920 | 31.0559 ±2.5827 | 74.2667 ±1.5974 | 58.35 |
| Llama-Guard-3-8B-Q4_K_S | 66.1333 ±1.7292 | 80.00 | 37.8667 ±1.7724 | 30.4000 ±1.6807 | 71.6000 ±1.6477 | 57.20 |
| Llama-Guard-3-8B-Q5_K_M | 65.8667 ±1.7325 | 81.33 | 38.0000 ±1.7736 | 31.6000 ±1.6988 | 72.6667 ±1.6284 | 57.89 |
| Llama-Guard-3-8B-Q5_K_S | 65.7333 ±1.7342 | 81.33 | 37.4667 ±1.7686 | 31.8667 ±1.7026 | 72.9333 ±1.6235 | 57.87 |
| Llama-Guard-3-8B-Q6_K | 65.6000 ±1.7358 | 81.06 | 38.6667 ±1.7794 | 30.9333 ±1.6889 | 72.5333 ±1.6309 | 57.76 |
| Llama-Guard-3-8B-Q8_0 | 65.3333 ±1.7389 | 81.60 | 38.4000 ±1.7771 | 30.8000 ±1.6869 | 72.8000 ±1.6260 | 57.79 |
| Llama-Guard-3-8B-F16 | 64.9333 ±1.7436 | 81.60 | 38.2667 ±1.7759 | 30.6667 ±1.6849 | 72.8000 ±1.6260 | 57.65 |

Tokens per Second - Benchmarks

Scores generated using llama-bench. Naive (llama-quantize with no optimization) Q4_K_M quantization included for comparison.
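The three tests reported below can be reproduced with a single llama-bench invocation along these lines (the model path is a placeholder):

```sh
# pp512 (prompt processing), tg128 (token generation) and a combined
# pp1024+tg1024 run, pinned to 6 threads
./llama-bench -m Llama-Guard-3-8B-Q4_K_M.gguf -t 6 \
  -p 512 -n 128 -pg 1024,1024
```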

| model | size | params | backend | threads | test | t/s |
| :--- | ---: | ---: | :--- | ---: | :--- | ---: |
| Llama-Guard-3-8B-Q4_K_M | 4.10 GiB | 8.03 B | Metal,BLAS | 6 | pp512 | 312.18 ± 0.88 |
| Llama-Guard-3-8B-Q4_K_M | 4.10 GiB | 8.03 B | Metal,BLAS | 6 | tg128 | 27.88 ± 0.03 |
| Llama-Guard-3-8B-Q4_K_M | 4.10 GiB | 8.03 B | Metal,BLAS | 6 | pp1024+tg1024 | 44.53 ± 0.11 |
| Llama-Guard-3-8B-Q4_K_M (naive) | 4.58 GiB | 8.03 B | Metal,BLAS | 6 | pp512 | 329.30 ± 0.12 |
| Llama-Guard-3-8B-Q4_K_M (naive) | 4.58 GiB | 8.03 B | Metal,BLAS | 6 | tg128 | 26.51 ± 0.02 |
| Llama-Guard-3-8B-Q4_K_M (naive) | 4.58 GiB | 8.03 B | Metal,BLAS | 6 | pp1024+tg1024 | 42.69 ± 1.00 |

Metrics used

Perplexity: one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of 1 indicates an exact match between predicted and actual, whereas values greater than one indicate the degree of "surprise" when the generated token differs from the expected one.
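For a sequence of $N$ tokens $x_1, \dots, x_N$, perplexity is the exponentiated average negative log-likelihood the model assigns to each token given its context:

$$
\mathrm{PPL} = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_{<i}) \right)
$$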

Kullback–Leibler (KL) Divergence: a statistical measure of how much one probability distribution differs from another. When quantizing models (or altering the original tensors in any way, for that matter), the closer we can keep the output probability distribution to the original model's, the better, so values closer to 0 are better.
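Concretely, at each position the divergence is computed over the vocabulary $V$ between the full-precision model's next-token distribution $P$ and the quantized model's distribution $Q$, and the per-token values are averaged to give the μKLD reported above:

$$
D_{\mathrm{KL}}(P \parallel Q) = \sum_{i \in V} P(i) \log \frac{P(i)}{Q(i)}
$$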

AI2 Reasoning Challenge (ARC): a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.

HellaSwag: the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.

MMLU: the Massive Multitask Language Understanding evaluates LLMs’ general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.

Truthful QA: evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.

Winogrande: based on the Winograd Schema Challenge, it is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.

Credits

A big Thank You! to Colin Kealty for the many contributions and for being one of the best sources of high-quality quantized models available on Hugging Face, and a really big Thank You! to Georgi Gerganov for his amazing work with llama.cpp and the ggml/gguf libraries.
