Zero-Shot Vision Encoder Grafting via LLM Surrogates
Abstract
Training vision encoders against small surrogate language models before transferring them to large language models reduces training costs and improves performance.
Vision language models (VLMs) typically pair a modestly sized vision encoder with a large language model (LLM), e.g., Llama-70B, making the decoder the primary computational burden during training. To reduce costs, a promising strategy is to first train the vision encoder using a small language model before transferring it to the large one. We construct small "surrogate models" that share the same embedding space and representation language as the large target LLM by directly inheriting its shallow layers. Vision encoders trained on the surrogate can then be directly transferred to the larger model, a process we call zero-shot grafting -- when plugged directly into the full-size target LLM, the grafted pair surpasses the encoder-surrogate pair and, on some benchmarks, even performs on par with full decoder training with the target LLM. Furthermore, our surrogate training approach reduces overall VLM training costs by ~45% when using Llama-70B as the decoder.
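To make the surrogate idea concrete, below is a minimal sketch (PyTorch + Hugging Face transformers) of building a small surrogate decoder that inherits the shallow layers of a large target LLM, so a vision encoder trained against the surrogate lives in the target's embedding space and can later be grafted into the full model. The model ID, number of kept layers, and freezing policy are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: build a shallow "surrogate" decoder from a large target LLM by
# keeping only its first few transformer blocks (embeddings, final norm,
# and LM head are reused automatically). Assumed values are marked below.
import torch
from transformers import AutoModelForCausalLM

TARGET_ID = "meta-llama/Meta-Llama-3-70B"  # illustrative target decoder
KEEP_LAYERS = 8                            # assumed number of inherited shallow layers

# Load the target LLM and truncate it to its shallow layers in place.
surrogate = AutoModelForCausalLM.from_pretrained(TARGET_ID, torch_dtype=torch.bfloat16)
surrogate.model.layers = surrogate.model.layers[:KEEP_LAYERS]
surrogate.config.num_hidden_layers = KEEP_LAYERS

# Freeze the surrogate so that only the vision encoder (and its projector)
# are optimized against the target LLM's representation space.
for p in surrogate.parameters():
    p.requires_grad_(False)

# ... train vision_encoder + projector with `surrogate` as the decoder ...
# At deployment, the trained encoder/projector are plugged unchanged into the
# full target LLM loaded from TARGET_ID ("zero-shot grafting").
```

Because the surrogate reuses the target's token embeddings and shallow blocks, the visual features the encoder learns to emit are already expressed in the representation language the full-size decoder expects, which is what makes the zero-shot transfer possible.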
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models (2025)
- MASSV: Multimodal Adaptation and Self-Data Distillation for Speculative Decoding of Vision-Language Models (2025)
- The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer (2025)
- Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM (2025)
- Leveraging Decoder Architectures for Learned Sparse Retrieval (2025)
- TinyAlign: Boosting Lightweight Vision-Language Models by Mitigating Modal Alignment Bottlenecks (2025)
- DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs (2025)