---
license: apache-2.0
datasets:
- OS-Copilot/OS-Atlas-data
language:
- en
base_model:
- bytedance-research/UI-TARS-7B-DPO
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- gui
---

## Model Description

Ui-Tars-7B-Instruct-Finetuned-Os-Atlas is a GUI grounding model fine-tuned from [**UI-TARS-7B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO). It was fine-tuned on the [OS-Atlas dataset](https://huggingface.co/datasets/OS-Copilot/OS-Atlas-data/tree/main) released by OS-Copilot.

## Evaluation Results

We evaluated the model on two GUI grounding benchmarks, ScreenSpot-Pro and ScreenSpot-v2, using the [ScreenSpot-Pro evaluation harness](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding); the evaluation scripts we used are linked below. The table compares our model against the base model. Scores are grounding accuracy (%).

| Model | Size | ScreenSpot-Pro | ScreenSpot-v2 |
|-------|:----:|:--------------:|:-------------:|
| [UI-TARS-7B-DPO](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) | 7B | 27.0 | 83.0 |
| **Ui-Tars-7B-Instruct-Finetuned-Os-Atlas (ours)** | 7B | **33.0** | **91.8** |

**Note: the base model scores slightly lower here than in the UI-TARS paper because the prompts used for the paper's evaluation are not publicly available. We used the benchmarks' default prompts for both the base and the fine-tuned model.**

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/am_fintor-neuralleap/huggingface/runs/hl90xquy?nw=nwuseram_fintor)

This model was trained with supervised fine-tuning (SFT) using LoRA.

### Evaluation Scripts

The evaluation scripts are available here: [Screenspot_Ui-Tars](https://github.com/ma-neuralleap/ScreenSpot-Pro-GUI-Grounding/blob/main/models/uitaris.py)

### Quick Start

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# Default processor
processor = AutoProcessor.from_pretrained("Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas")

# Example input
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/image.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

The example above asks for a generic description; a grounding-style prompt is sketched at the end of this card.

## Citation
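
## Grounding Prompt Example

Since this model is fine-tuned for GUI grounding rather than captioning, a grounding-style request is usually more representative than the generic prompt in the Quick Start. The snippet below is a minimal sketch: the instruction wording, the screenshot path, and the expectation that the model answers with screen coordinates are illustrative assumptions, not an official prompt format for this checkpoint.

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the fine-tuned checkpoint (same repo as in the Quick Start).
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas")

# A grounding-style request. The instruction text and the coordinate-style
# answer it asks for are assumptions, not a prompt format documented in this card.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/screenshot.png"},  # hypothetical screenshot path
            {
                "type": "text",
                "text": 'Output the coordinates of the element described as: "Submit button".',
            },
        ],
    }
]

# Same preprocessing and generation flow as the Quick Start.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```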