# Contribute

Transformers supports many quantization methods, such as QLoRA, GPTQ, LLM.int8, and AWQ, but there are still many more quantization approaches that haven't been integrated yet. To make adding and using these quantization methods with Transformers easier, use the `HfQuantizer` class. `HfQuantizer` is designed as an internal helper class for adding a quantization method rather than something applied to every PyTorch module.

This guide will show you how to integrate a new quantization method with `HfQuantizer`.

## Requirements

Before integrating a new quantization method into Transformers, ensure the method meets the following requirements. Only quantization methods that can be run with PyTorch modules are supported.

Some quantization methods may require "pre-quantizing" the model through data calibration (for example, AWQ). In this case, we prefer to only support inference in Transformers and let the third-party library maintained by the ML community handle the model quantization itself.
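
For example, a method whose quantized layer can be expressed as a plain PyTorch module, roughly like the (entirely hypothetical) `Int8Linear` sketch below, fits this requirement; one that needs a custom runtime outside of PyTorch modules does not.

```python
import torch
from torch import nn


class Int8Linear(nn.Module):
    """Hypothetical weight-only int8 linear layer, only meant to illustrate the kind of
    PyTorch module a supported quantization backend typically provides."""

    def __init__(self, in_features: int, out_features: int, bias: bool = True):
        super().__init__()
        # Quantized weights and per-output-channel scales are stored as buffers
        # because they are not trained directly.
        self.register_buffer("weight_int8", torch.zeros(out_features, in_features, dtype=torch.int8))
        self.register_buffer("scale", torch.ones(out_features, 1))
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize on the fly and fall back to a regular matmul.
        weight = self.weight_int8.to(x.dtype) * self.scale.to(x.dtype)
        return nn.functional.linear(x, weight, self.bias)
```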

## Create a new HfQuantizer class

  1. Create a new quantization config class inside `src/transformers/utils/quantization_config.py`, and add the new quantization config to the `_import_structure` inside Transformers' `src/transformers/__init__.py` file (a minimal config sketch appears after this list).

  2. Create a new file inside `src/transformers/quantizers/` named `quantizer_your_method.py`, and make it inherit from [`~quantizers.HfQuantizer`]. Make sure to add the new quantizer and quantization config to the quantization auto-mapping in `src/transformers/quantizers/auto.py`.

  3. Define the class attributes and property methods that describe your quantization method (the quantizer sketch after this list shows typical attributes).

  4. Write the `validate_environment` and `update_torch_dtype` methods. These methods are called before creating the quantized model to ensure users use the right configuration. Refer to other quantizers for examples of how they are implemented.

  5. Write the `_process_model_before_weight_loading` method. In Transformers, quantized models are first initialized on the `"meta"` device before the weights are loaded. This means the `_process_model_before_weight_loading` method takes care of manipulating the model skeleton to replace some modules (for example, `nn.Linear`) with the target quantization modules (a sketch of this and the next step appears after this list).

    You can define the module replacement logic or any other utility method by creating a new file in `src/transformers/integrations/` and exposing the relevant methods in that folder's `__init__.py` file. The best starting point is to have a look at another quantization method, such as `quantizer_awq.py`.

  6. Write the `_process_model_after_weight_loading` method. This method enables implementing additional features that require manipulating the model after the weights have been loaded.

  7. Document everything! Make sure your quantization method is documented by adding a new file under `docs/source/en/quantization`.

  8. Add tests by adding the package to our nightly Dockerfile inside `docker/transformers-quantization-latest-gpu`, and then adding a new test file in `tests/quantization/xxx`. Feel free to check out existing quantization methods to see how their tests are implemented (a minimal test sketch follows this list).
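
To make step 1 concrete, below is a minimal sketch of what such a config class could look like. The `FooQuantizationConfig` name, its fields, and the `"foo"` method string are made up for illustration; `QuantizationConfigMixin` is the base class the existing configs in `src/transformers/utils/quantization_config.py` build on, but check the current code for the exact expectations (for example, how `quant_method` is typed).

```python
from transformers.utils.quantization_config import QuantizationConfigMixin


class FooQuantizationConfig(QuantizationConfigMixin):
    """Hypothetical config for a made-up 'foo' quantization method (illustration only)."""

    def __init__(self, bits: int = 4, group_size: int = 128, **kwargs):
        # Existing configs set `quant_method` to a value of an enum; a plain string is
        # used here to keep the sketch self-contained.
        self.quant_method = "foo"
        self.bits = bits
        self.group_size = group_size

    def to_dict(self) -> dict:
        # Serialized under the `quantization_config` key of the model's config.json.
        return {"quant_method": self.quant_method, "bits": self.bits, "group_size": self.group_size}
```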
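
For steps 2 to 4, a quantizer skeleton could look like the following. The attribute names (`requires_calibration`, `required_packages`), the `FooHfQuantizer` class, and the `foo-kernels` package are assumptions used for illustration; compare them against the current `HfQuantizer` base class and the existing quantizers before relying on them. The commented lines at the end hint at the auto-mapping registration from step 2 (the mapping names are also assumptions).

```python
import importlib.util

from transformers.quantizers import HfQuantizer
from transformers.utils import is_torch_available


class FooHfQuantizer(HfQuantizer):
    """Hypothetical quantizer for the made-up 'foo' method (illustration only)."""

    # Whether the method needs a data calibration step. If True, only inference of
    # already-quantized checkpoints is supported (see the Requirements section).
    requires_calibration = False
    # Third-party packages needed to run the quantized weights.
    required_packages = ["foo-kernels"]

    def validate_environment(self, *args, **kwargs):
        # Called before the quantized model is created; fail early with an actionable
        # message if the runtime requirements are not met.
        if not is_torch_available():
            raise ImportError("FooHfQuantizer requires PyTorch to be installed.")
        if importlib.util.find_spec("foo_kernels") is None:
            raise ImportError("Loading a foo-quantized model requires the `foo-kernels` package.")

    def update_torch_dtype(self, torch_dtype):
        # Pick a sensible default dtype for the non-quantized parts of the model.
        import torch

        if torch_dtype is None:
            torch_dtype = torch.float16
        return torch_dtype


# Registration in src/transformers/quantizers/auto.py usually amounts to adding the new
# entries to the existing mappings, roughly (mapping names are assumptions):
#   AUTO_QUANTIZER_MAPPING["foo"] = FooHfQuantizer
#   AUTO_QUANTIZATION_CONFIG_MAPPING["foo"] = FooQuantizationConfig
```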
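
Steps 5 and 6 could then be filled in along these lines. `replace_with_foo_linear` stands in for the module-replacement utility you would place under `src/transformers/integrations/` and expose in its `__init__.py`; both it and the `FooLinear` module imported from the hypothetical `foo_kernels` backend are illustrative names, not real APIs.

```python
from torch import nn

from transformers.quantizers import HfQuantizer


def replace_with_foo_linear(model: nn.Module, quantization_config) -> nn.Module:
    """Hypothetical helper for src/transformers/integrations/: recursively swap
    nn.Linear layers for the backend's quantized module."""
    from foo_kernels import FooLinear  # hypothetical third-party backend

    for name, module in model.named_children():
        if isinstance(module, nn.Linear):
            setattr(
                model, name, FooLinear(module.in_features, module.out_features, bias=module.bias is not None)
            )
        else:
            replace_with_foo_linear(module, quantization_config)
    return model


class FooHfQuantizer(HfQuantizer):
    # ... class attributes and validation methods from the previous sketch ...

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model skeleton still lives on the "meta" device here, so swapping modules
        # only rewrites the architecture; no real weights are moved yet.
        replace_with_foo_linear(model, self.quantization_config)

    def _process_model_after_weight_loading(self, model, **kwargs):
        # The weights are loaded by now; run any final packing, tying, or sanity checks.
        return model
```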
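
Finally, for step 8, a test can be as simple as loading a small model with the new config and asserting that the expected modules were swapped in. The checkpoint name, `FooQuantizationConfig`, and `FooLinear` below are placeholders; the existing files under `tests/quantization/` are the authoritative reference for structure, decorators, and required hardware markers.

```python
import unittest

from transformers import AutoModelForCausalLM


class FooQuantizationTest(unittest.TestCase):
    def test_quantized_model_loads(self):
        # Hypothetical config and backend module; a tiny checkpoint keeps the test fast.
        from foo_kernels import FooLinear
        from transformers import FooQuantizationConfig

        config = FooQuantizationConfig(bits=4)
        model = AutoModelForCausalLM.from_pretrained(
            "hf-internal-testing/tiny-random-LlamaForCausalLM",
            quantization_config=config,
        )
        # At least one linear layer should have been replaced by the quantized module.
        self.assertTrue(any(isinstance(m, FooLinear) for m in model.modules()))
```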
