nielsr (HF Staff) committed (verified)
Commit a87bc8d · 1 parent: ac2cef3

Improve model card


This PR adds more metadata to the model card: the library name, pipeline tag, base model, and training dataset.

Feel free to tweak if required.
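For context, the new `pipeline_tag` and `library_name` are what let the Hub attach a task widget and an automatic code snippet to the repo. Assuming the repo's remote code is compatible with the generic image-text-to-text pipeline (not verified here), loading through the high-level API would look roughly like this, with a placeholder image path:

```python
# Hedged sketch: relies on the `image-text-to-text` pipeline task being
# compatible with this repo's custom (trust_remote_code) model classes.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="Hon-Wong/VoRA-7B-Instruct",
    trust_remote_code=True,
)
# "path/to/image.jpg" is a placeholder, not a file shipped with the repo.
print(pipe(images="path/to/image.jpg", text="<image> Describe this image."))
```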

Files changed (1):
  1. README.md (+40 −29)
README.md CHANGED
@@ -1,37 +1,48 @@
+---
+library_name: transformers
+pipeline_tag: image-text-to-text
+base_model:
+- Hon-Wong/VoRA-7B-Base
+datasets:
+- Hon-Wong/VoRA-Recap-29M
+---
+
 # VoRA
 * [ArXiv Paper](https://arxiv.org/abs/2503.20680)
 * [Github](https://github.com/Hon-Wong/VoRA)
 
 ## Quickstart
 
+The model can be used as follows:
+
 ```python
-import torch
-from transformers import AutoProcessor, AutoModelForCausalLM
-model_name = "Hon-Wong/VoRA-7B-Instruct"
-processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
-conversation = [
-    {
-        "role": "user",
-        "content": [
-            {
-                "type": "image",
-                "url": "{image path or url}"
-            },
-            {
-                "type": "text",
-                "text": "<image> Describe this image."
-            }
-        ]
-    }
-]
-model_inputs = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=True, return_tensors='pt', return_dict=True).to(model.device)
-gen_kwargs = {"max_new_tokens": 1024, "eos_token_id": processor.tokenizer.eos_token_id}
-
-with torch.inference_mode():
-    outputs = model.generate(model_inputs, **gen_kwargs)
-output_text = processor.tokenizer.batch_decode(
-    outputs, skip_special_tokens=True
-)
-print(output_text)
+import torch
+from transformers import AutoProcessor, AutoModelForCausalLM
+model_name = "Hon-Wong/VoRA-7B-Instruct"
+processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
+conversation = [
+    {
+        "role": "user",
+        "content": [
+            {
+                "type": "image",
+                "url": "{image path or url}"
+            },
+            {
+                "type": "text",
+                "text": "<image> Describe this image."
+            }
+        ]
+    }
+]
+model_inputs = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=True, return_tensors='pt', return_dict=True).to(model.device)
+gen_kwargs = {"max_new_tokens": 1024, "eos_token_id": processor.tokenizer.eos_token_id}
+
+with torch.inference_mode():
+    outputs = model.generate(model_inputs, **gen_kwargs)
+output_text = processor.tokenizer.batch_decode(
+    outputs, skip_special_tokens=True
+)
+print(output_text)
 ```
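One possible follow-up tweak to the quickstart: `apply_chat_template(..., return_dict=True)` returns a dict-like `BatchFeature`, and stock `transformers` `generate` expects its tensors unpacked rather than passed as the first positional argument. VoRA's remote code may well accept the dict as-is, so treat this as a hedged sketch of the variant rather than a required fix; the image path is a placeholder.

```python
# Minimal sketch of the quickstart with the processor output unpacked into
# `generate`. Assumes stock `transformers` generation semantics; the model's
# custom (trust_remote_code) generate may already accept the packed dict.
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

model_name = "Hon-Wong/VoRA-7B-Instruct"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "path/to/image.jpg"},  # placeholder path
            {"type": "text", "text": "<image> Describe this image."},
        ],
    }
]

# Tokenize the chat template and move the BatchFeature to the model's device.
model_inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

with torch.inference_mode():
    # Unpack the dict so input_ids and image tensors land on the expected kwargs.
    outputs = model.generate(
        **model_inputs,
        max_new_tokens=1024,
        eos_token_id=processor.tokenizer.eos_token_id,
    )

print(processor.tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

If the repository's custom `generate` already handles the packed input, the snippet in the diff works unchanged.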