Model Description
Qwen2.5-VL-7B-Instruct-Finetuned-Os-Atlas is a GUI grounding model fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct on the OS-Copilot dataset.
Evaluation Results
We evaluated the model on two benchmarks: Screenspot Pro and Screenspot v2, and we include the evaluation scripts used for both. The table below compares our model's performance against that of the base model.
| Model | Size | Screenspot Pro | Screenspot v2 |
|---|---|---|---|
| Qwen2.5-VL-7B-Instruct (base) | 7B | 11.0 | 55.0 |
| Qwen2.5-VL-7B-Instruct-Finetuned-Os-Atlas (ours) | 7B | 21.3 | 75.8 |
Note: the base model scores slightly lower here than in the paper because the prompts used for the paper's evaluation are not publicly available; we used the default prompts when evaluating both the base and fine-tuned models.
Training procedure
This model was trained with supervised fine-tuning (SFT) using LoRA.
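For reference, the sketch below shows the kind of LoRA-on-top-of-SFT setup this describes, using the Hugging Face peft library; the rank, alpha, dropout, and target modules are illustrative assumptions, not the actual training configuration.

```python
# Minimal sketch of an SFT + LoRA setup with peft.
# All hyperparameters below are illustrative assumptions, not the actual training config.
import torch
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                 # assumed adapter rank
    lora_alpha=32,        # assumed scaling factor
    lora_dropout=0.05,    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed: attention projections only
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
# The adapter can then be trained with a standard SFT loop on the OS-Copilot data.
```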
Evaluation Scripts
The evaluation scripts are available here: Screenspot_Qwen2.5_VL.
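Screenspot-style grounding is typically scored as click accuracy: a prediction counts as correct when the predicted point lies inside the ground-truth bounding box of the target element. The snippet below is a minimal sketch of that check, independent of the linked scripts.

```python
# Sketch of Screenspot-style grounding accuracy: a predicted click point is a hit when it
# falls inside the ground-truth bounding box (x1, y1, x2, y2) of the target UI element.
from typing import Iterable, Tuple

Point = Tuple[float, float]
Box = Tuple[float, float, float, float]

def point_in_box(point: Point, box: Box) -> bool:
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def grounding_accuracy(predictions: Iterable[Point], gt_boxes: Iterable[Box]) -> float:
    pairs = list(zip(predictions, gt_boxes))
    hits = sum(point_in_box(p, b) for p, b in pairs)
    return hits / len(pairs) if pairs else 0.0
```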
Quick Start
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# Load the model on the available device(s); flash_attention_2 requires the flash-attn package
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"Fintor/Qwen2.5-VL-7B-Instruct-Finetuned-Os-Atlas",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto",
)
# default processor
processor = AutoProcessor.from_pretrained("Fintor/Qwen2.5-VL-7B-Instruct-Finetuned-Os-Atlas")
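# Optionally bound the number of visual tokens per image to trade detail for memory;
# min_pixels / max_pixels are supported by the Qwen2.5-VL processor:
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "Fintor/Qwen2.5-VL-7B-Instruct-Finetuned-Os-Atlas", min_pixels=min_pixels, max_pixels=max_pixels
# )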
# Example input
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "path/to/image.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
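Because this is a GUI grounding model, a more typical query asks it to locate a UI element in a screenshot. The sketch below reuses the model and processor loaded above; the prompt wording and the regex that pulls the first coordinate pair out of the generated text are assumptions, so adapt them to the output format you actually observe.

```python
import re

# Grounding-style query (sketch): ask for the location of a UI element in a screenshot.
# The prompt wording and the coordinate-parsing regex are assumptions, not a guaranteed format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/screenshot.png"},
            {"type": "text", "text": 'Output the click coordinates of the "Settings" button in this screenshot.'},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
response = processor.batch_decode(
    [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)],
    skip_special_tokens=True,
)[0]

# Naively take the first two numbers in the response as an (x, y) click point.
match = re.search(r"(\d+(?:\.\d+)?)\D+(\d+(?:\.\d+)?)", response)
if match:
    x, y = float(match.group(1)), float(match.group(2))
    print(f"Predicted click point: ({x}, {y})")
else:
    print(response)
```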
Batch inference
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)