Model Information
Quantized version of meta-llama/Llama-3.2-1B-Instruct, using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 128
- Symmetric quantization
- Algorithm: TEQ (Trainable Equivalent Transformation for Quantization of LLMs)
Quantization framework: Intel Neural Compressor version 3.3.1
Note: this INT4 version of Llama-3.2-1B-Instruct has been quantized to run inference on CPU.
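To make the settings above concrete, here is a minimal PyTorch sketch of symmetric, group-wise INT4 fake quantization: each row of a weight matrix is split into groups of 128 values, each group gets a single scale, and the zero-point is fixed at 0. This is for intuition only; it is not the TEQ implementation, which additionally learns per-channel equivalent transformations before quantizing.

```python
import torch

def fake_quant_int4_sym(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Illustrative symmetric INT4 quantize/dequantize with per-group scales."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # Symmetric scheme: zero-point is 0, one scale per group, INT4 range [-8, 7].
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scale), -8, 7)    # integer codes
    return (q * scale).reshape(out_features, in_features)  # dequantized weights
```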
Disclaimer
This quantized model comes with no warranty. It has been developed experimentally, for research purposes only.
This repository contains only two files: quantized_weight.pt (the quantized weights) and qconfig.json (the quantization configuration). The quantized model must be used in combination with the base model meta-llama/Llama-3.2-1B-Instruct.
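As a rough sketch of how the pieces fit together (the real reconstruction logic lives in the woq-inference scripts referenced below; the loading shown here is an assumption for illustration):

```python
import json
import torch
from transformers import AutoModelForCausalLM

# Load the full-precision base model, then overlay this repository's artifacts.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
weights = torch.load("quantized_weight.pt", map_location="cpu")  # quantized tensors
with open("qconfig.json") as f:
    qconfig = json.load(f)  # per-layer bits, group size, and scheme
# Rebuilding the quantized modules from `weights` + `qconfig` is handled by the
# teq_inference.py script in fbaldassarri/woq-inference (see "Run Inference").
```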
Replication Recipe
```
$ conda create --name neural-compressor-3.3.1 --file requirements_conda_neural-compressor-3.3.1
$ python meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-sym.py
```
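For reference, the script follows Intel Neural Compressor's weight-only quantization flow. The condensed sketch below shows the core steps; the class and argument names (TEQConfig, prepare, convert, use_sym) are based on the Neural Compressor 3.x weight-only API and should be double-checked against the 3.3.1 documentation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor.torch.quantization import TEQConfig, convert, prepare

model_name = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 128, symmetric: the settings used for this repository.
quant_config = TEQConfig(bits=4, group_size=128, use_sym=True)

example_inputs = tokenizer("Hello, world!", return_tensors="pt")["input_ids"]
model = prepare(model, quant_config, example_inputs=example_inputs)
# ... TEQ training loop: learns equivalent-transformation scales here ...
model = convert(model)  # produces the quantized weights and qconfig
```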
Run Inference
To run inference, you can use the scripts in fbaldassarri/woq-inference.
```
$ python teq_inference.py --base meta-llama/Llama-3.2-1B-Instruct --model_dir ./meta-llama_Llama-3.2-1B-Instruct-TEQ-int4-gs128-sym --weights_file quantized_weight.pt --config_file qconfig.json --prompt "What If you have got superpowers?" --device cpu
```
Note: you should probably train this model on a downstream task to be able to use it for predictions and inference.
License
This model is released under the Llama 3.2 Community License, inherited from the base model meta-llama/Llama-3.2-1B-Instruct.