- Paper: InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models • arXiv:2504.10479 • 250 upvotes
- OpenGVLab/InternVL3-1B • Image-Text-to-Text • 20.5k downloads • 53 likes
- OpenGVLab/InternVL3-2B • Image-Text-to-Text • 12.4k downloads • 18 likes
- OpenGVLab/InternVL3-8B • Image-Text-to-Text • 53.5k downloads • 42 likes

OpenGVLab
AI & ML interests: Computer Vision
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.
Models
- InternVL: a pioneering open-source alternative to GPT-4V (a loading sketch follows this list).
- InternImage: a large-scale vision foundation model with deformable convolutions.
- InternVideo: large-scale video foundation models for multimodal understanding.
- VideoChat: an end-to-end chat assistant for video comprehension.
- All-Seeing-Project: towards panoptic visual recognition and understanding of the open world.
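
As a quick orientation, here is a minimal sketch of loading one of the InternVL3 checkpoints listed at the top of this page with Hugging Face Transformers. The `trust_remote_code` path and the `.chat()` helper mirror the usage shown on earlier InternVL model cards but are assumptions here; the InternVL3 model cards remain the authoritative reference.

```python
# Minimal sketch: load an InternVL3 checkpoint from the Hub.
# Assumption: the repo ships remote modeling code exposing a `.chat()` helper,
# as in earlier InternVL releases; consult the model card for the exact API.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL3-1B"  # smallest InternVL3 variant listed above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # InternVL repos provide custom modeling code
).eval()

# Text-only query; pass preprocessed pixel_values instead of None for images.
response = model.chat(
    tokenizer,
    None,                                    # pixel_values (None = text-only)
    "What kinds of tasks can InternVL3 handle?",
    dict(max_new_tokens=128, do_sample=False),
)
print(response)
```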
Datasets
- ShareGPT4o: a groundbreaking large-scale resource that we plan to open-source: 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- InternVid: a large-scale video-text dataset for multimodal understanding and generation.
- MMPR: a high-quality, large-scale multimodal preference dataset (a loading sketch follows this list).
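
A minimal sketch of pulling one of the datasets above with the `datasets` library; the split name and record layout are assumptions, so check the dataset card for the actual schema.

```python
# Minimal sketch: load the MMPR preference data listed above with 🤗 Datasets.
# Assumptions: a "train" split exists and records load as plain dicts; if the
# repo layout is not directly loadable, follow the dataset card instead.
from datasets import load_dataset

mmpr = load_dataset("OpenGVLab/MMPR-v1.1", split="train")
print(mmpr)      # dataset size and feature names
print(mmpr[0])   # inspect one preference record
```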
Benchmarks
- MVBench: a comprehensive benchmark for multimodal video understanding (a download sketch follows this list).
- CRPE: a benchmark covering all elements of the relation triplets (subject, predicate, object), providing a systematic platform for the evaluation of relation comprehension ability.
- MM-NIAH: a comprehensive benchmark for the comprehension of long multimodal documents.
- GMAI-MMBench: a comprehensive multimodal evaluation benchmark towards general medical AI.
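
To run one of these benchmarks locally, the data files can be fetched from the Hub. The sketch below assumes the benchmark is hosted as a dataset repository named `OpenGVLab/MVBench`; verify the exact repository id and layout on the benchmark's page.

```python
# Minimal sketch: download a benchmark's files from the Hub for local evaluation.
# Assumption: the benchmark lives in a dataset repo under the OpenGVLab org
# (here "OpenGVLab/MVBench"); check the benchmark page for the exact repo id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="OpenGVLab/MVBench", repo_type="dataset")
print("Benchmark files downloaded to:", local_dir)
```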
Collections: 24

Spaces (11)
- 💬 InternVideo2.5: Hierarchical Compression for Long-Context Video Modeling
- ⚡ InternVL: Chat with an AI that understands text and images
- 🐨 MVBench Leaderboard: Submit model evaluation and view leaderboard
- 👁 InternVideo2 Chat 8B HD: Upload a video to chat about its contents
- 🚀 ControlLLM: Display maintenance message for ControlLLM
- 🐍 VideoMamba: Classify video and image content
Models (217)
- OpenGVLab/InternVL3-38B-Pretrained • Image-Text-to-Text • 25 downloads
- OpenGVLab/InternVL3-14B-Pretrained • Image-Text-to-Text • 19 downloads
- OpenGVLab/InternVL3-9B-Pretrained • Image-Text-to-Text • 34 downloads
- OpenGVLab/InternVL3-8B-Pretrained • Image-Text-to-Text • 31 downloads
- OpenGVLab/InternVL3-2B-Pretrained • Image-Text-to-Text • 62 downloads • 1 like
- OpenGVLab/InternVL3-1B-Pretrained • Image-Text-to-Text • 46 downloads • 2 likes
- OpenGVLab/InternVL3-38B-Instruct • Image-Text-to-Text • 495 downloads • 3 likes
- OpenGVLab/InternVL3-14B-Instruct • Image-Text-to-Text • 1.44k downloads • 3 likes
- OpenGVLab/InternVL3-9B-Instruct • Image-Text-to-Text • 412 downloads • 2 likes
- OpenGVLab/InternVL3-8B-Instruct • Image-Text-to-Text • 1.5k downloads • 2 likes
Datasets (41)
- OpenGVLab/InternVL-Data • 4.15k downloads • 100 likes
- OpenGVLab/VisualPRM400K-v1.1 • 114 downloads • 5 likes
- OpenGVLab/VisualPRM400K-v1.1-Raw • 59 downloads • 2 likes
- OpenGVLab/VisualPRM400K • 312 downloads • 8 likes
- OpenGVLab/MMPR-v1.2-prompts • 380 downloads • 1 like
- OpenGVLab/MMPR-v1.2 • 438 downloads • 16 likes
- OpenGVLab/MMPR-v1.1 • 239 downloads • 46 likes
- OpenGVLab/MMPR • 134 downloads • 49 likes
- OpenGVLab/LongVid • 40 downloads • 2 likes
- OpenGVLab/NIAH-Video • 629 downloads • 129 likes
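
Only the first page of each listing is shown above (217 models and 41 datasets in total). A minimal sketch of enumerating the full set with `huggingface_hub`; `list_models` and `list_datasets` are standard Hub client calls, and the download count may be missing for some repos, hence the fallback.

```python
# Minimal sketch: list all OpenGVLab repositories on the Hugging Face Hub.
from huggingface_hub import list_datasets, list_models

models = list(list_models(author="OpenGVLab"))
datasets = list(list_datasets(author="OpenGVLab"))
print(f"{len(models)} models, {len(datasets)} datasets")

# Show the ten most downloaded models.
for info in sorted(models, key=lambda m: m.downloads or 0, reverse=True)[:10]:
    print(f"{info.id}: {info.downloads} downloads")
```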