Improve model card (#1)
- Improve model card (a87bc8d0a149a69e007c2907dae440295e772944)
Co-authored-by: Niels Rogge <[email protected]>

README.md CHANGED
@@ -1,37 +1,48 @@
+---
+library_name: transformers
+pipeline_tag: image-text-to-text
+base_model:
+- Hon-Wong/VoRA-7B-Base
+datasets:
+- Hon-Wong/VoRA-Recap-29M
+---
+
 # VoRA
 * [ArXiv Paper](https://arxiv.org/abs/2503.20680)
 * [Github](https://github.com/Hon-Wong/VoRA)
 
 ## Quickstart
 
+The model can be used as follows:
+
 ```python
-… (previous 29-line quickstart snippet; its content is not rendered in this diff view)
+import torch
+from transformers import AutoProcessor, AutoModelForCausalLM
+
+model_name = "Hon-Wong/VoRA-7B-Instruct"
+# trust_remote_code is required because VoRA ships custom modeling code.
+processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
+
+conversation = [
+    {
+        "role": "user",
+        "content": [
+            {
+                "type": "image",
+                "url": "{image path or url}"
+            },
+            {
+                "type": "text",
+                "text": "<image> Describe this image."
+            }
+        ]
+    }
+]
+
+# Apply the chat template, tokenize, and move the inputs to the model's device.
+model_inputs = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=True, return_tensors="pt", return_dict=True).to(model.device)
+gen_kwargs = {"max_new_tokens": 1024, "eos_token_id": processor.tokenizer.eos_token_id}
+
+with torch.inference_mode():
+    # Unpack the BatchEncoding so input_ids/attention_mask are passed as keyword arguments.
+    outputs = model.generate(**model_inputs, **gen_kwargs)
+output_text = processor.tokenizer.batch_decode(outputs, skip_special_tokens=True)
+print(output_text)
 ```
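
With the stock `transformers` generation API, `outputs` contains the prompt tokens followed by the newly generated ones, so the decoded text repeats the prompt. A minimal sketch of trimming it, continuing from the snippet above and assuming this custom-code model follows the standard `generate` contract (an assumption, since VoRA loads remote code):

```python
# Hypothetical continuation of the quickstart above; assumes outputs are laid out
# as [prompt tokens | generated tokens], as with the stock generate().
prompt_len = model_inputs["input_ids"].shape[1]  # number of prompt tokens
reply_ids = outputs[:, prompt_len:]              # keep only the generated tokens
reply = processor.tokenizer.batch_decode(reply_ids, skip_special_tokens=True)
print(reply[0])
```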
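
For GPU inference, `transformers` models are often loaded in half precision with automatic device placement; a sketch under the assumption that VoRA's remote code accepts the standard `from_pretrained` keyword arguments (not verified for this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: VoRA's custom code passes these standard kwargs through.
model = AutoModelForCausalLM.from_pretrained(
    "Hon-Wong/VoRA-7B-Instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
    device_map="auto",           # place weights on available GPU(s); requires accelerate
)
```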