Update README.md
README.md CHANGED
@@ -19,8 +19,9 @@ This repo contains the code and data for [VLM2Vec: Training Vision-Language Mode
 <img width="1432" alt="abs" src="https://raw.githubusercontent.com/TIGER-AI-Lab/VLM2Vec/refs/heads/main/figures//train_vlm.png">
 
-**We’ve released several VLM2Vec models built on different VLM backbones: https://huggingface.co/collections/TIGER-Lab/vlm2vec-6705f418271d085836e0cdd5
-
+**We’ve released several VLM2Vec models built on different VLM backbones: https://huggingface.co/collections/TIGER-Lab/vlm2vec-6705f418271d085836e0cdd5**
+
+**Also, the performance of these models is updated in the README of our GitHub repository: https://github.com/TIGER-AI-Lab/VLM2Vec/blob/main/README.md**
 
 ## Release
 Our model is trained on MMEB-train and evaluated on MMEB-eval with contrastive learning, using only in-batch negatives. Our best results came from LoRA training with a batch size of 1024; we also provide a checkpoint from full training with a batch size of 2048. Our results on the 36 evaluation datasets are:
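The checkpoints in the collection above can be fetched like any other Hugging Face repo. A minimal sketch with `huggingface_hub` (the repo id below is an illustrative assumption; substitute any model listed in the collection, and use the repo's own code to produce embeddings):

```python
# Minimal sketch: download one released VLM2Vec checkpoint locally.
# The repo id is an assumed example; pick any model from the collection above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TIGER-Lab/VLM2Vec-Full")
print(local_dir)  # path to the downloaded checkpoint files
```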
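For readers unfamiliar with the setup, training with only in-batch negatives means each query is scored against every candidate in its batch and only the matching one counts as a positive. A minimal PyTorch sketch of such a loss (function name, temperature, and dimensions are illustrative assumptions, not the repo's exact implementation):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, target_emb, temperature=0.02):
    """InfoNCE-style loss: for each query, the other targets in the batch
    act as negatives; no mined or external negatives are used."""
    q = F.normalize(query_emb, dim=-1)          # cosine-normalize queries
    t = F.normalize(target_emb, dim=-1)         # cosine-normalize targets
    logits = q @ t.T / temperature              # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)

# Example with a batch of 1024 query/target pairs (embedding size is illustrative).
queries, targets = torch.randn(1024, 768), torch.randn(1024, 768)
loss = in_batch_contrastive_loss(queries, targets)
```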