---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- Hon-Wong/VoRA-7B-Base
datasets:
- Hon-Wong/VoRA-Recap-29M
---
# VoRA
* [ArXiv Paper](https://arxiv.org/abs/2503.20680)
* [Github](https://github.com/Hon-Wong/VoRA)
## Quickstart
The model can be used as follows:
```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

model_name = "Hon-Wong/VoRA-7B-Instruct"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "{image path or url}"  # replace with a real image path or URL
            },
            {
                "type": "text",
                "text": "<image> Describe this image."
            }
        ]
    }
]

# Tokenize the conversation and preprocess the image in a single step.
model_inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

gen_kwargs = {"max_new_tokens": 1024, "eos_token_id": processor.tokenizer.eos_token_id}

with torch.inference_mode():
    # Unpack the processor outputs (input_ids, attention_mask, pixel values, ...)
    outputs = model.generate(**model_inputs, **gen_kwargs)

# `generate` returns the prompt followed by the new tokens; decode only the latter.
generated_ids = outputs[:, model_inputs["input_ids"].shape[1]:]
output_text = processor.tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True
)
print(output_text)
```
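
For faster inference on a GPU, the model can be loaded in half precision. This is a minimal sketch, not part of the original card; it assumes a CUDA device is available, that the checkpoint loads cleanly in bfloat16, and that `accelerate` is installed for `device_map="auto"`:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: a CUDA GPU is available and the weights support bfloat16.
model = AutoModelForCausalLM.from_pretrained(
    "Hon-Wong/VoRA-7B-Instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # lets accelerate place the layers on the available device(s)
)
```

The rest of the quickstart is unchanged; `processor.apply_chat_template(...).to(model.device)` already moves the inputs to wherever the model was placed.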