VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction
Abstract
VLM-3R is a Vision-Language Model framework that incorporates 3D reconstructive instruction tuning, enabling it to process monocular video frames and perform embodied reasoning with robust visual-spatial and temporal understanding.
The rapid advancement of Large Multimodal Models (LMMs) for 2D images and videos has motivated extending these models to understand 3D scenes, aiming for human-like visual-spatial intelligence. Nevertheless, achieving deep spatial understanding comparable to human capabilities poses significant challenges in model encoding and data acquisition. Existing methods frequently depend on external depth sensors for geometry capture or utilize off-the-shelf algorithms to pre-construct 3D maps, thereby limiting their scalability, especially with prevalent monocular video inputs and for time-sensitive applications. In this work, we introduce VLM-3R, a unified framework for Vision-Language Models (VLMs) that incorporates 3D Reconstructive instruction tuning. VLM-3R processes monocular video frames by employing a geometry encoder to derive implicit 3D tokens that capture spatial context. Leveraging our Spatial-Visual-View Fusion and over 200K curated 3D reconstructive instruction tuning question-answer (QA) pairs, VLM-3R effectively aligns real-world spatial context with language instructions, enabling monocular 3D spatial assistance and embodied reasoning. To facilitate the evaluation of temporal reasoning, we introduce the Vision-Spatial-Temporal Intelligence benchmark, featuring over 138.6K QA pairs across five distinct tasks focused on evolving spatial relationships. Extensive experiments demonstrate that our model, VLM-3R, not only facilitates robust visual-spatial reasoning but also enables the understanding of temporal 3D context changes, excelling in both accuracy and scalability.
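To make the fusion idea concrete, below is a minimal sketch of how implicit 3D geometry tokens could be fused with per-frame visual tokens before being passed to the language model. This is not the authors' released implementation: the module name, dimensions, and the cross-attention design are illustrative assumptions inspired by the Spatial-Visual-View Fusion described in the abstract.

```python
# Minimal sketch (assumed design, not the authors' code): visual tokens from a
# 2D vision encoder attend to implicit 3D tokens from a geometry encoder, and
# the fused tokens are projected into the LLM's embedding space.
import torch
import torch.nn as nn


class SpatialVisualViewFusion(nn.Module):
    """Illustrative fusion of visual tokens with implicit 3D geometry tokens."""

    def __init__(self, dim: int = 1024, num_heads: int = 8):
        super().__init__()
        # Cross-attention: visual tokens (queries) attend to 3D tokens (keys/values).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)  # projection toward the LLM token space

    def forward(self, visual_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N_vis, dim) patch tokens from a 2D vision encoder
        # geo_tokens:    (B, N_geo, dim) implicit 3D tokens from a geometry encoder
        fused, _ = self.cross_attn(query=visual_tokens, key=geo_tokens, value=geo_tokens)
        fused = self.norm(visual_tokens + fused)  # residual connection
        return self.proj(fused)                   # ready to concatenate with text tokens


if __name__ == "__main__":
    B, N_vis, N_geo, dim = 2, 576, 196, 1024
    fusion = SpatialVisualViewFusion(dim=dim)
    visual = torch.randn(B, N_vis, dim)    # placeholder for per-frame visual features
    geometry = torch.randn(B, N_geo, dim)  # placeholder for geometry-encoder outputs
    print(fusion(visual, geometry).shape)  # torch.Size([2, 576, 1024])
```

In this hypothetical layout, the fused tokens would be interleaved with the tokenized instruction during 3D reconstructive instruction tuning; the actual fusion mechanism in VLM-3R may differ.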
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models (2025)
- Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness (2025)
- SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning (2025)
- Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation (2025)
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? (2025)
- AdaToken-3D: Dynamic Spatial Gating for Efficient 3D Large Multimodal-Models Reasoning (2025)
- LLaVA-4D: Embedding SpatioTemporal Prompt into LMMs for 4D Scene Understanding (2025)