VideoGameBench: Can Vision-Language Models complete popular video games?
Abstract
VideoGameBench evaluates vision-language models on real-time video game play using only raw visual inputs and high-level objectives, highlighting how far frontier models are from human-like perception, spatial navigation, and memory.
Vision-language models (VLMs) have achieved strong results on coding and math benchmarks that are challenging for humans, yet their ability to perform tasks that come naturally to humans, such as perception, spatial navigation, and memory management, remains understudied. Real video games are crafted to be intuitive for humans to learn and master by leveraging innate inductive biases, making them an ideal testbed for evaluating such capabilities in VLMs. To this end, we introduce VideoGameBench, a benchmark consisting of 10 popular video games from the 1990s that VLMs directly interact with in real time. VideoGameBench challenges models to complete entire games with access to only raw visual inputs and a high-level description of objectives and controls, a significant departure from existing setups that rely on game-specific scaffolding and auxiliary information. We keep three of the games secret to encourage solutions that generalize to unseen environments. Our experiments show that frontier vision-language models struggle to progress beyond the beginning of each game. We find inference latency to be a major limitation of frontier models in the real-time setting; we therefore introduce VideoGameBench Lite, a setting where the game pauses while waiting for the model's next action. The best-performing model, Gemini 2.5 Pro, completes only 0.48% of VideoGameBench and 1.6% of VideoGameBench Lite. We hope that the formalization of the human skills mentioned above into this benchmark motivates progress in these research directions.
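To make the setup concrete, here is a rough Python sketch (not the official VideoGameBench harness) of the interaction loop the abstract describes: the agent sees only raw frames plus a fixed text description of objectives and controls, returns a key press, and, in the Lite setting, the emulator pauses while the model is thinking. The `Emulator` class, `query_vlm` function, and prompt text below are illustrative placeholders of my own, not the benchmark's actual API.

```python
# Hypothetical sketch of a real-time VLM game-agent loop; all names are placeholders.
import time
from dataclasses import dataclass


@dataclass
class Emulator:
    """Placeholder for a Game Boy / MS-DOS emulator wrapper (not a real library)."""
    paused: bool = False

    def screenshot(self) -> bytes:
        return b""  # would return the current frame as image bytes

    def press(self, key: str) -> None:
        print(f"pressing {key}")  # would forward the key press to the running game

    def set_paused(self, paused: bool) -> None:
        self.paused = paused  # Lite setting: the game halts while the model decides


def query_vlm(frame: bytes, prompt: str) -> str:
    """Placeholder for a call to a vision-language model API."""
    return "RIGHT"  # a real model would pick a key based on the frame


PROMPT = (
    "You are playing a 1990s video game. Objective: reach the end of the level. "
    "Controls: UP, DOWN, LEFT, RIGHT, A, B, START. Reply with exactly one key."
)


def run_episode(emulator: Emulator, max_steps: int = 1000, lite: bool = False) -> None:
    for _ in range(max_steps):
        frame = emulator.screenshot()
        if lite:
            emulator.set_paused(True)      # VideoGameBench Lite pauses the game here
        start = time.time()
        action = query_vlm(frame, PROMPT)
        latency = time.time() - start      # in the real-time setting this latency costs progress
        if lite:
            emulator.set_paused(False)
        emulator.press(action.strip().upper())
        print(f"step latency: {latency:.2f}s, action: {action}")


if __name__ == "__main__":
    run_episode(Emulator(), max_steps=3, lite=True)
```

The point of the sketch is only to show why latency matters: in the default real-time setting the game keeps running during `query_vlm`, whereas the Lite variant freezes the emulator for that span.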
Community
We have a new benchmark that challenges frontier VLMs to play DOS and Game Boy games from the 1990s. The top model, Gemini 2.5 Pro, completes just 0.48% of the benchmark.
https://vgbench.com has lots of clips and info.
We also have a code repository you can check out: https://github.com/alexzhang13/videogamebench
You can generate clips like these yourself, e.g. Gemini 2.5 Pro playing Kirby's Dream Land.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- V-MAGE: A Game Evaluation Framework for Assessing Visual-Centric Capabilities in Multimodal Large Language Models (2025)
- lmgame-Bench: How Good are LLMs at Playing Games? (2025)
- TALES: Text Adventure Learning Environment Suite (2025)
- Measuring General Intelligence with Generated Games (2025)
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning (2025)
- Learning to Play Like Humans: A Framework for LLM Adaptation in Interactive Fiction Games (2025)
- VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models (2025)