Kaichengalex committed
Commit 5dce2ed · verified · 1 Parent(s): 470d72e

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -29,12 +29,12 @@ Yingda Chen,</span>
 
 
 <p align="center">
- <img src="figures/fig1.png" width="85%" height="85">
+ <img src="figures/fig1.png">
 </p>
 
 
 ## 🎺 News
- - [2025/04/24]: ✨We release the evaluate and demo code.
+ - [2025/04/24]: ✨We release the evaluation and demo code.
 - [2025/04/24]: ✨The paper of UniME is submitted to arxiv.
 - [2025/04/22]: ✨We release the model weight of UniME in [🤗 Huggingface](https://huggingface.co/collections/DeepGlint-AI/unime-6805fa16ab0071a96bef29d2)
 
@@ -42,12 +42,12 @@ Yingda Chen,</span>
 To enhance the MLLM's embedding capability, we propose textual discriminative knowledge distillation. The training process involves decoupling the MLLM's LLM component and processing text with the prompt "Summarize the above sentences in one word.", followed by aligning the student (MLLM) and teacher (NV-Embed V2) embeddings via KL divergence on batch-wise similarity distributions. **Notably, only the LLM component is fine-tuned during this process, while all other parameters remain frozen**.
 
 <p align="center">
- <img src="figures/fig2.png" width="85%" >
+ <img src="figures/fig2.png">
 </p>
 
 After that, we propose hard negative enhanced instruction tuning enhances multimodal systems by improving visual sensitivity, strengthening cross-modal alignment, and boosting instruction-following capabilities. At its core are two key innovations: a false negative filtering mechanism using a similarity threshold to eliminate misleading samples, and an automatic hard negative sampling strategy that selects top-k similar but non-matching examples to increase training difficulty.
 <p align="center">
- <img src="figures/fig3.png" width="85%" >
+ <img src="figures/fig3.png">
 </p>
 
 
@@ -103,12 +103,12 @@ print("Score: ", Score)
 ## 🔢 Results
 ### Diverse Retrieval
 <p align="center">
- <img src="figures/res1.png" width="85%" >
+ <img src="figures/res1.png">
 </p>
 
 ### MMEB
 <p align="center">
- <img src="figures/res2.png" width="85%" >
+ <img src="figures/res2.png">
 </p>
 
 ## 📖 Citation
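The textual discriminative knowledge distillation step described in the README diff above aligns the student (the MLLM's LLM component, prompted to summarize in one word) with the teacher (NV-Embed V2) by matching batch-wise similarity distributions under a KL divergence. Below is a minimal sketch of that objective, not the released UniME training code; the function name, the temperature value, and the reduction choice are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_emb: torch.Tensor,
                      teacher_emb: torch.Tensor,
                      temperature: float = 0.05) -> torch.Tensor:
    """KL divergence between batch-wise similarity distributions
    (illustrative sketch, not the authors' implementation).

    student_emb / teacher_emb: (batch, dim) embeddings of the same texts,
    from the prompted LLM component and from NV-Embed V2 respectively.
    The temperature is a placeholder value, not taken from the paper.
    """
    student_emb = F.normalize(student_emb, dim=-1)
    teacher_emb = F.normalize(teacher_emb, dim=-1)

    # Similarity of every sample to every other sample in the batch.
    student_sim = student_emb @ student_emb.T / temperature
    teacher_sim = teacher_emb @ teacher_emb.T / temperature

    # Match the student's row-wise similarity distribution to the teacher's;
    # only the student (the LLM component) receives gradients.
    student_log_prob = F.log_softmax(student_sim, dim=-1)
    teacher_prob = F.softmax(teacher_sim, dim=-1)
    return F.kl_div(student_log_prob, teacher_prob, reduction="batchmean")
```

Because only the similarity distributions are compared, a frozen teacher from a different embedding space can supervise the student without any dimension matching between the two models.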
 
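The hard negative enhanced instruction tuning described in the same diff rests on two mechanisms: a similarity threshold that filters likely false negatives, and automatic selection of the top-k most similar non-matching candidates as hard negatives. The sketch below illustrates that mining step under assumed placeholder values for the threshold and k; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_hard_negatives(query_emb: torch.Tensor,
                          candidate_embs: torch.Tensor,
                          positive_idx: int,
                          false_neg_threshold: float = 0.9,
                          top_k: int = 8) -> torch.Tensor:
    """Return indices of hard negatives for one query
    (illustrative sketch; the threshold and k are placeholder values).

    query_emb: (dim,) embedding of the query.
    candidate_embs: (num_candidates, dim) embeddings of candidate targets.
    positive_idx: index of the ground-truth match, excluded from sampling.
    """
    sims = F.cosine_similarity(query_emb.unsqueeze(0), candidate_embs, dim=-1)

    # Exclude the positive, then drop candidates that look too similar:
    # these are treated as likely false negatives rather than hard negatives.
    sims[positive_idx] = float("-inf")
    sims[sims > false_neg_threshold] = float("-inf")

    # Keep the hardest remaining candidates: similar but non-matching.
    k = min(top_k, int((sims > float("-inf")).sum().item()))
    return torch.topk(sims, k=k).indices
```

During instruction tuning, the returned candidates would be added to the query's batch as hard negatives, which is what raises the training difficulty described in the README.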