chenkq committed
Commit c2d541a · verified · 1 Parent(s): 296694a

Update README.md

Files changed (1)
  1. README.md +25 -5
README.md CHANGED
@@ -99,25 +99,25 @@ from qwen_vl_utils import process_vision_info
 
 # default: Load the model on the available device(s)
 model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
-    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
+    "Qwen/Qwen2.5-VL-7B-Instruct-AWQ", torch_dtype="auto", device_map="auto"
 )
 
 # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
 # model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
-#     "Qwen/Qwen2.5-VL-7B-Instruct",
+#     "Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
 #     torch_dtype=torch.bfloat16,
 #     attn_implementation="flash_attention_2",
 #     device_map="auto",
 # )
 
 # default processor
-processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
+processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct-AWQ")
 
 # The default range for the number of visual tokens per image in the model is 4-16384.
 # You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
 # min_pixels = 256*28*28
 # max_pixels = 1280*28*28
-# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
+# processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct-AWQ", min_pixels=min_pixels, max_pixels=max_pixels)
 
 messages = [
     {
@@ -207,7 +207,7 @@ The model supports a wide range of resolution inputs. By default, it uses the na
 min_pixels = 256 * 28 * 28
 max_pixels = 1280 * 28 * 28
 processor = AutoProcessor.from_pretrained(
-    "Qwen/Qwen2.5-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
+    "Qwen/Qwen2.5-VL-7B-Instruct-AWQ", min_pixels=min_pixels, max_pixels=max_pixels
 )
 ```
 
@@ -274,6 +274,26 @@ However, it should be noted that this method has a significant impact on the per
 At the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.
 
 
+### Benchmark
+#### Performance of Quantized Models
+This section reports the generation performance of quantized models (including GPTQ and AWQ) of the Qwen2.5-VL series. Specifically, we report:
+
+- MMMU_VAL (Accuracy)
+- DocVQA_VAL (Accuracy)
+- MMBench_DEV_EN (Accuracy)
+- MathVista_MINI (Accuracy)
+
+We use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate all models.
+
+| Model Size | Quantization | MMMU_VAL | DocVQA_VAL | MMBench_DEV_EN | MathVista_MINI |
+| --- | --- | --- | --- | --- | --- |
+| Qwen2.5-VL-72B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | 70.0 | 96.1 | 88.2 | 75.3 |
+| | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct-AWQ)) | 69.1 | 96.0 | 87.9 | 73.8 |
+| Qwen2.5-VL-7B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-7B-Instruct)) | 58.4 | 94.9 | 84.1 | 67.9 |
+| | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-7B-Instruct-AWQ)) | 55.6 | 94.6 | 84.2 | 64.7 |
+| Qwen2.5-VL-3B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-3B-Instruct)) | 51.7 | 93.0 | 79.8 | 61.4 |
+| | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2.5-VL-3B-Instruct-AWQ)) | 49.1 | 91.8 | 78.0 | 58.8 |
+
 
 
 ## Citation
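
The first hunk only shows the identifier changes, and the README's inference example is truncated at `messages = [`. For reference, here is a minimal, self-contained sketch of how the AWQ checkpoint named in this commit is typically loaded and run with Transformers and `qwen_vl_utils`. It is not the README's verbatim code: the image URL, prompt, and `max_new_tokens` value are illustrative assumptions.

```python
# Minimal sketch of single-image inference with the AWQ checkpoint referenced in
# the diff above. Not the README's verbatim example: the image URL, prompt, and
# max_new_tokens below are illustrative assumptions.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-7B-Instruct-AWQ"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder image reference; a local path or file:// URL also works.
            {"type": "image", "image": "https://example.com/demo.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and collect the vision inputs referenced in the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

The commented-out `attn_implementation="flash_attention_2"` variant shown in the first hunk can be swapped in unchanged if flash-attn is installed.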
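The third hunk keeps the README's note that, because MRoPE is economical with position ids, `max_position_embeddings` can simply be raised (for example to 64k) for long video inputs. Below is a hedged sketch of one way to do that programmatically; where the field lives (top-level config versus a nested text config) varies across `transformers` versions, and the 64000 value merely mirrors the "64k" mentioned in the text. Editing `max_position_embeddings` directly in the checkpoint's `config.json` is an equivalent alternative.

```python
# Sketch only: raise the position-embedding limit for long-video inputs, per the
# README note quoted in the diff above. The field may sit on the top-level config
# or on a nested text_config depending on the transformers version, so handle both.
from transformers import AutoConfig, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct-AWQ"
config = AutoConfig.from_pretrained(model_id)
target = getattr(config, "text_config", None) or config
target.max_position_embeddings = 64000  # stands in for the "64k" suggested above

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)
```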