Fine-tuning Quantized Neural Networks with Zeroth-order Optimization
Abstract
As the size of large language models grows exponentially, GPU memory has become a bottleneck for adapting these models to downstream tasks. In this paper, we aim to push the limits of memory-efficient training by minimizing memory usage on model weights, gradients, and optimizer states within a unified framework. Our idea is to eliminate both gradients and optimizer states using zeroth-order optimization, which approximates gradients by perturbing weights during forward passes to estimate gradient directions. To minimize memory usage on weights, we employ model quantization, e.g., converting from bfloat16 to int4. However, directly applying zeroth-order optimization to quantized weights is infeasible due to the precision gap between discrete weights and continuous gradients, which would otherwise require de-quantization and re-quantization. To overcome this challenge, we propose Quantized Zeroth-order Optimization (QZO), a novel approach that perturbs the continuous quantization scale for gradient estimation and uses a directional derivative clipping method to stabilize training. QZO is orthogonal to both scalar-based and codebook-based post-training quantization methods. Compared to full-parameter fine-tuning in bfloat16, QZO can reduce the total memory cost by more than 18× for 4-bit LLMs, and enables fine-tuning Llama-2-13B and Stable Diffusion 3.5 Large on a single 24GB GPU.
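The core idea in the abstract can be illustrated with a toy sketch: keep the discrete int4 codes frozen, and run SPSA-style zeroth-order optimization only on the continuous per-row quantization scales, clipping the directional derivative before each update. This is a minimal illustration of the general technique, not the paper's implementation; the layer shape, loss, and all hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "quantized layer": frozen int4 codes plus a continuous per-row scale.
# In QZO's setting, only the continuous scales are perturbed and updated.
W_q = rng.integers(-8, 8, size=(4, 4)).astype(np.float64)  # frozen int4 codes
scale = np.ones(4)                                         # trainable per-row scales

x = rng.normal(size=4)
y_target = rng.normal(size=4)

def loss(s):
    # De-quantize on the fly (W = diag(s) @ W_q), then a simple regression loss.
    W = s[:, None] * W_q
    return float(np.mean((W @ x - y_target) ** 2))

def qzo_step(s, eps=1e-3, lr=1e-2, clip=1.0):
    """One zeroth-order step on the scales with directional-derivative clipping.
    Illustrative sketch; hyperparameters are made up, not taken from the paper."""
    z = rng.normal(size=s.shape)  # random perturbation direction
    # Two forward passes estimate the directional derivative along z.
    d = (loss(s + eps * z) - loss(s - eps * z)) / (2 * eps)
    d = np.clip(d, -clip, clip)   # clipping stabilizes the update magnitude
    return s - lr * d * z         # SGD-style update along z

initial = loss(scale)
for _ in range(500):
    scale = qzo_step(scale)
```

Note that the update needs only two forward passes and one random seed per step: no gradient buffer and no optimizer state are stored, which is where the memory savings over backpropagation-based fine-tuning come from.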
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Enhancing Ultra-Low-Bit Quantization of Large Language Models Through Saliency-Aware Partial Retraining (2025)
- An Extra RMSNorm is All You Need for Fine Tuning to 1.58 Bits (2025)
- QUAD: Quantization and Parameter-Efficient Tuning of LLM with Activation Decomposition (2025)
- Memory-Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation (2025)
- Perturbation-efficient Zeroth-order Optimization for Hardware-friendly On-device Training (2025)
- HOT: Hadamard-based Optimized Training (2025)
- GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance (2025)