VisualThinker-R1-Zero

TurningPoint AI

👁️ Paper Link: https://arxiv.org/abs/2503.05132

🚀 Introduction

The recent DeepSeek-R1 demonstrated how reinforcement learning with a simple rule-based reward can enable autonomous development of complex reasoning in large language models, characterized by the "aha moment", in which the model manifests self-reflection and an increased response length during training. However, attempts to extend this success to multimodal reasoning have often failed to reproduce these key characteristics. In this report, we present the first successful replication of these emergent characteristics for multimodal reasoning, using only a non-SFT 2B model. Starting from Qwen2-VL-2B and applying reinforcement learning directly on the SAT dataset, our model achieves 59.47% accuracy on CVBench, outperforming the base model by approximately 30% and exceeding both SFT settings by approximately 2%. In addition, we share our failed attempts and insights from trying to achieve R1-like reasoning using RL with instruct models, aiming to shed light on the challenges involved. Our key observations are: (1) applying RL to an instruct model often results in trivial reasoning trajectories, and (2) naive length rewards are ineffective in eliciting reasoning capabilities. The project code is available at https://github.com/turningpoint-ai/VisualThinker-R1-Zero.
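
To make "simple rule-based reward" concrete, the sketch below scores a completion with a format term (the reasoning is wrapped in <think> tags) and an accuracy term (the lettered choice after the reasoning matches the ground truth). It is a minimal illustration of the general R1-Zero-style recipe under assumed tag names and weights, not the exact reward function used in our training code.

# Minimal sketch of an R1-Zero-style rule-based reward.
# Tag names and weights are assumptions, not the repository's exact implementation.
import re

def rule_based_reward(completion: str, ground_truth: str) -> float:
    reward = 0.0
    # Format reward: reasoning enclosed in <think>...</think>.
    if re.search(r"<think>.*?</think>", completion, flags=re.DOTALL):
        reward += 0.5
    # Accuracy reward: the lettered choice after </think> matches the label.
    answer_part = completion.split("</think>")[-1]
    match = re.search(r"\(([A-D])\)", answer_part)
    if match and match.group(1) == ground_truth:
        reward += 1.0
    return reward

# A well-formatted, correct completion receives the full reward of 1.5.
print(rule_based_reward("<think>The sofa sits below the picture.</think> The answer is (B).", "B"))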

🔮 Highlights

  1. We are the first to successfully produce the emergent "aha moment" and increased response length for multimodal reasoning on just a non-SFT 2B model.
  2. We showed that vision-centric tasks could also benefit from improved reasoning capabilities.

Similar to DeepSeek-R1, self-reflection behavior is also observed during our RL training on vision-centric reasoning tasks. The model exhibits an emergent ability to rethink and correct its mistakes:

. . .
Therefore, dark brown wooden bed with white blanket is not above the doorway.
But wait! I can think of something else.
Maybe it's just higher than above the doorway, but slightly lower than above the doorway.
. . .
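
Because the "aha moment" surfaces as both reflective phrasing and longer responses, a simple way to watch for it during training is to log the average completion length and the fraction of completions containing reflection cues. The helper below is an illustrative monitor with an assumed keyword list, not part of our training pipeline.

# Illustrative monitor for "aha moment" signals in a batch of completions.
# The keyword list is an assumption, not the exact set used in our experiments.
REFLECTION_KEYWORDS = ("but wait", "wait,", "let me recheck", "on second thought", "i made a mistake")

def aha_stats(completions: list[str]) -> dict:
    lengths = [len(c.split()) for c in completions]
    reflective = [c for c in completions if any(k in c.lower() for k in REFLECTION_KEYWORDS)]
    return {
        "mean_length_words": sum(lengths) / max(len(lengths), 1),
        "reflection_ratio": len(reflective) / max(len(completions), 1),
    }

# Example on a tiny batch of generated responses.
print(aha_stats([
    "Therefore, the bed is not above the doorway. But wait! I can think of something else.",
    "The answer is (B).",
]))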

โš™๏ธ Requirements and Installation

  • Python >= 3.10
  • PyTorch == 2.0.1
  • CUDA Version >= 11.7
  • Install required packages:
# install transformers
pip install git+https://github.com/huggingface/transformers
# install qwen-vl utils
pip install qwen-vl-utils
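
A quick sanity check like the one below (an optional snippet, not part of the repository) confirms that the required versions are picked up and that a CUDA device is visible:

# Optional environment sanity check (not part of the repository).
import torch
import transformers
import qwen_vl_utils  # noqa: F401  -- only checking that the import succeeds

print("PyTorch:", torch.__version__)              # expected 2.0.1
print("Transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())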

💻 Model Downloads and Usage

from PIL import Image
import requests
from io import BytesIO
from transformers import AutoProcessor, AutoModelForImageTextToText

# Load model directly
processor = AutoProcessor.from_pretrained("turningpoint-ai/VisualThinker-R1-Zero")
model = AutoModelForImageTextToText.from_pretrained(
    "turningpoint-ai/VisualThinker-R1-Zero", torch_dtype="auto", device_map="auto"
)
model.eval()

# Prepare image input
image_url = "https://multimodal-r1.s3.us-west-1.amazonaws.com/demo_image.jpg"

# Prepare text input
question = "Considering the relative positions of the sofa and the picture in the image provided, where is the sofa located with respect to the picture? Select from the following choices.\n(A) above or \n(B) below"
prompt = f"A conversation between User and Assistant. The user asks a question about the image, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer.\nUser: {question} \nAssistant: Let me solve this step by step.\n<think>"

# Create Message
message = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": image_url,
            },
            {"type": "text", "text": "<image>" + prompt},
        ],
    }
]

# Process input
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))
text = processor.apply_chat_template(message, tokenize=False, add_generation_prompt=True)
inputs = processor(
    text=text,
    images=image,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate the output
generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
batch_output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

# Get output
output_text = batch_output_text[0]
print(output_text)
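
Since the prompt ends with an opened <think> tag, the generated text contains the reasoning trace followed by the final choice. The helper below is an illustrative post-processing step, assuming the model closes the tag with </think> and states a lettered choice such as "(B)":

# Illustrative answer extraction (assumes "...</think> ... (A)/(B) ..." output format).
import re

def extract_choice(generated_text: str):
    answer_part = generated_text.split("</think>")[-1]
    match = re.search(r"\(([A-Z])\)", answer_part)
    return match.group(1) if match else None

print("Predicted choice:", extract_choice(output_text))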

🙌 Stay Connected!

We are always open to engaging discussions, collaborations, or even just sharing a virtual coffee. To get in touch or join our team, visit TurningPoint AI's homepage for contact information.

📖 Acknowledgements

We sincerely thank DeepSeek, Open-R1, QwenVL, Open-R1-Multimodal, R1-V, SAT, and CV-Bench for providing the open-source resources that laid the foundation of our project.

๐Ÿค Contributors

Here are the key contributors from TurningPoint AI to this project:

Hengguang Zhou¹*, Xirui Li¹*, Ruochen Wang¹†, Minhao Cheng², Tianyi Zhou³, and Cho-Jui Hsieh¹,⁴

* Project Leads, † Main Advisor. ¹University of California, Los Angeles; ²Penn State University; ³University of Maryland; ⁴Google Research

โœ๏ธ Citation

@misc{zhou2025r1zerosahamomentvisual,
      title={R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model}, 
      author={Hengguang Zhou and Xirui Li and Ruochen Wang and Minhao Cheng and Tianyi Zhou and Cho-Jui Hsieh},
      year={2025},
      eprint={2503.05132},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2503.05132}, 
}