Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality
Abstract
Token reduction in Transformer models, beyond efficiency, enhances multimodal integration, reduces hallucinations, and improves training stability in generative modeling.
In Transformer architectures, tokens (discrete units derived from raw data) are formed by segmenting inputs into fixed-length chunks. Each token is then mapped to an embedding, enabling parallel attention computations while preserving the input's essential information. Because Transformer self-attention scales quadratically with sequence length, token reduction has primarily been used as an efficiency strategy, especially in unimodal vision and language settings, where it helps balance computational cost, memory usage, and inference latency. Despite these advances, this paper argues that token reduction should transcend its traditional efficiency-oriented role in the era of large generative models. Instead, we position it as a fundamental principle of generative modeling, one that critically influences both model architecture and broader applications. Specifically, we contend that across vision, language, and multimodal systems, token reduction can: (i) facilitate deeper multimodal integration and alignment, (ii) mitigate "overthinking" and hallucinations, (iii) maintain coherence over long inputs, and (iv) enhance training stability, among other benefits. Reframing token reduction as more than an efficiency measure, we outline promising future directions, including algorithm design, reinforcement learning-guided token reduction, token optimization for in-context learning, and applications in broader ML and scientific domains. We highlight its potential to drive new model architectures and learning strategies that improve robustness, increase interpretability, and better align with the objectives of generative modeling.
Community
Due to the quadratic computational complexity of Transformer self-attention, token reduction has been widely used as an efficiency strategy to balance compute cost, memory usage, and inference latency in vision, language, and multimodal models. However, viewing token reduction purely through the lens of efficiency is fundamentally limiting.
This paper instead positions token reduction as a core design principle of generative modeling, deeply integrated with both training and inference, that prioritizes the tokens that enhance downstream performance while preserving semantic integrity.
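To make the idea concrete, here is a minimal sketch of score-based token reduction in PyTorch. This is a generic illustration, not the method proposed in the paper: it scores each token by the L2 norm of its embedding (an assumed stand-in for the attention-based or learned importance scores used in practice) and keeps only the top-k tokens before the remaining layers run, shrinking the quadratic attention cost from O(N^2) toward O(k^2).

```python
# Hypothetical sketch of score-based token reduction (not the paper's method).
import torch


def reduce_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """tokens: (batch, num_tokens, dim) -> (batch, k, dim), k = keep_ratio * num_tokens."""
    batch, num_tokens, dim = tokens.shape
    k = max(1, int(num_tokens * keep_ratio))

    # Importance score per token; real methods often use attention maps instead.
    scores = tokens.norm(dim=-1)                  # (batch, num_tokens)
    topk = scores.topk(k, dim=-1).indices         # (batch, k)
    topk, _ = topk.sort(dim=-1)                   # preserve the original token order

    # Gather the retained tokens.
    idx = topk.unsqueeze(-1).expand(-1, -1, dim)  # (batch, k, dim)
    return tokens.gather(dim=1, index=idx)


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)                  # e.g., ViT patch tokens
    pruned = reduce_tokens(x, keep_ratio=0.25)
    print(pruned.shape)                           # torch.Size([2, 49, 768])
```

Real systems typically derive the score from attention maps or a learned predictor and apply reduction progressively across layers rather than in a single step.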
⭐ Explore the GitHub repo: https://lnkd.in/er6i4pEQ
A detailed list of papers organized by modality can be found in this Google Sheet, including a brief introduction to the task, token reduction type, contribution, and methodology for each paper.
awesome!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Shifting AI Efficiency From Model-Centric to Data-Centric Compression (2025)
- CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models (2025)
- Think Twice, Act Once: Token-Aware Compression and Action Reuse for Efficient Inference in Vision-Language-Action Models (2025)
- STAR: Stage-Wise Attention-Guided Token Reduction for Efficient Large Vision-Language Models Inference (2025)
- ShortV: Efficient Multimodal Large Language Models by Freezing Visual Tokens in Ineffective Layers (2025)
- FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models (2025)
- Rethinking Causal Mask Attention for Vision-Language Inference (2025)