Abstract
A variational lower bound of the information bottleneck principle is implemented to enhance the robustness of multimodal large language models under distribution shifts.
Despite widespread adoption, multimodal large language models (MLLMs) suffer performance degradation when encountering unfamiliar queries under distribution shifts. Existing methods to improve MLLM generalization typically require either more instruction data or larger, more advanced model architectures, both of which incur non-trivial human labor or computational costs. In this work, we take an alternative approach to enhancing the robustness of MLLMs under distribution shifts, from a representation learning perspective. Inspired by the information bottleneck (IB) principle, we derive a variational lower bound of the IB for MLLMs and devise a practical implementation, Visual Instruction Bottleneck Tuning (Vittle). We then provide a theoretical justification of Vittle by revealing its connection to an information-theoretic robustness metric for MLLMs. Empirical validation of three MLLMs on open-ended and closed-form question answering and object hallucination detection tasks across 45 datasets, including 30 shift scenarios, demonstrates that Vittle consistently improves MLLM robustness under shifts by pursuing the learning of a minimal sufficient representation.
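For readers unfamiliar with the variational IB bound mentioned in the abstract, the sketch below shows a generic variational information bottleneck objective (a task likelihood term plus a KL compression term on the latent representation), in the spirit of Alemi et al.'s deep VIB. It is an illustrative assumption, not the paper's Vittle loss; the function name, layer placement, and the `beta` weight are hypothetical.

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, targets, beta=1e-3):
    """Generic variational information bottleneck objective (illustrative sketch).

    mu, logvar : parameters of a Gaussian posterior p(z|x), e.g. produced by an
                 intermediate layer over fused visual-text features (assumption).
    logits     : task predictions decoded from a sample z ~ p(z|x).
    targets    : ground-truth labels (e.g., next-token ids).
    beta       : trade-off weight between sufficiency and compression (hypothetical value).
    """
    # Sufficiency term: maximize E[log q(y|z)] via cross-entropy on the task.
    task_nll = F.cross_entropy(logits, targets)
    # Minimality term: KL(p(z|x) || N(0, I)) compresses the representation.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return task_nll + beta * kl
```

Minimizing this bound trades off predictive sufficiency against compression of the latent representation, which is the intuition behind pursuing a minimal sufficient representation for robustness.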
Community
A new learning objective for robust visual instruction tuning!
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Learning to Instruct for Visual Instruction Tuning (2025)
- ShortV: Efficient Multimodal Large Language Models by Freezing Visual Tokens in Ineffective Layers (2025)
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs (2025)
- CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning (2025)
- COMPACT: COMPositional Atomic-to-Complex Visual Capability Tuning (2025)
- PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models (2025)
- Dynamic Pyramid Network for Efficient Multimodal Large Language Model (2025)