Model Information

Quantized version of meta-llama/Llama-3.2-1B-Instruct, using torch.float32 for quantization tuning.

  • 4 bits (INT4)
  • group size = 128
  • Asymmetric quantization
  • Algorithm: TEQ (Trainable Equivalent Transformation for Quantization of LLMs)

Quantization framework: Intel Neural Compressor version 3.3.1

Note: this INT4 version of Llama-3.2-1B-Instruct has been quantized to run inference on CPU.
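
For intuition, here is a minimal, illustrative sketch of the numeric scheme the settings above describe: asymmetric, group-wise INT4 quantization, where each group of 128 weights is mapped onto 16 integer levels with its own scale and zero point. This is not the full TEQ algorithm, which additionally learns equivalent-transformation scales before quantizing.

```python
# Illustrative sketch of asymmetric, group-wise INT4 quantization.
# Not the TEQ algorithm itself -- TEQ also learns per-channel
# equivalent-transformation scales before this quantization step.
import torch

def quantize_asym_int4(w: torch.Tensor, group_size: int = 128):
    """Quantize a 2-D weight tensor to INT4 with per-group scale/zero point."""
    out_features, in_features = w.shape
    wg = w.reshape(out_features, in_features // group_size, group_size)

    w_min = wg.amin(dim=-1, keepdim=True)  # per-group minimum
    w_max = wg.amax(dim=-1, keepdim=True)  # per-group maximum

    # Asymmetric mapping: the 16 levels (0..15) span [w_min, w_max] exactly.
    scale = torch.clamp((w_max - w_min) / 15.0, min=1e-9)
    zero_point = torch.clamp(torch.round(-w_min / scale), 0, 15)

    q = torch.clamp(torch.round(wg / scale) + zero_point, 0, 15).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point, shape):
    return ((q.float() - zero_point) * scale).reshape(shape)

w = torch.randn(2048, 2048)
q, s, z = quantize_asym_int4(w, group_size=128)
w_hat = dequantize(q, s, z, w.shape)
print("max abs reconstruction error:", (w - w_hat).abs().max().item())
```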

Disclaimer

This quantized model comes with no warranty. It has been developed experimentally and is intended for research purposes only.

This repository contains only two files: quantized_weight.pt (the quantized weight structure) and qconfig.json (the quantization configuration). The quantized model they describe must be used in combination with the base model meta-llama/Llama-3.2-1B-Instruct.

Replication Recipe

$ conda create --name neural-compressor-3.3.1 --file requirements_conda_neural-compressor-3.3.1

$ python meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-asym.py
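
The recipe script above is not reproduced here; as a rough, hedged sketch, a TEQ run with Intel Neural Compressor 3.x might look like the following. The TEQConfig parameter names (bits, group_size, use_sym) and the prepare/convert flow are assumptions based on the neural_compressor.torch.quantization API and may differ between versions, so verify against the INC 3.3.1 documentation.

```python
# Hedged sketch of a TEQ quantization run with Intel Neural Compressor 3.x.
# Parameter names are assumptions; verify against the INC 3.3.1 docs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor.torch.quantization import TEQConfig, prepare, convert

model_name = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 128, asymmetric -- matching this repository's settings.
quant_config = TEQConfig(bits=4, group_size=128, use_sym=False)

example_inputs = tokenizer("calibration text", return_tensors="pt")["input_ids"]
model = prepare(model, quant_config, example_inputs=example_inputs)
# TEQ trains its equivalent-transformation scales here: a short loop over
# representative text would run forward passes at this point.
model = convert(model)
# The resulting quantized weights and qconfig.json are then saved to disk.
```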

Run Inference

To run inference, you can use the scripts in fbaldassarri/woq-inference.

python teq_inference.py --base meta-llama/Llama-3.2-1B-Instruct --model_dir ./meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-asym --weights_file quantized_weight.pt --config_file qconfig.json --prompt "What If you have got superpowers?" --device cpu
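
For reference, a rough sketch of what such an inference script has to do: rebuild the quantized model from quantized_weight.pt and qconfig.json on top of the FP32 base model, then generate. The load() helper below is an assumption about the neural_compressor.torch.quantization API in INC 3.x; if your installed version does not provide it, rely on teq_inference.py from the woq-inference repository instead.

```python
# Hedged sketch: reattach the INT4 weights to the base model and generate.
# load() and its original_model argument are assumed from the INC 3.x
# torch API; verify against your installed Neural Compressor version.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor.torch.quantization import load

base = "meta-llama/Llama-3.2-1B-Instruct"
original_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(base)

# Directory holding quantized_weight.pt and qconfig.json from this repository.
model = load("./meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-asym",
             original_model=original_model)

inputs = tokenizer("What if you had superpowers?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```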

Note: you should probably train this model on a downstream task before using it for predictions and inference.

License

Llama 3.2 Community License

