llava-dinov2-internlm2-7b-v1 is a LLaVA model fine-tuned from InternLM2-Chat-7B, with Dinov2 as the vision encoder.

I did not carefully tune the training hyperparameters, but the model still shows the capability to solve some tasks. This demonstrates that a visual encoder can be integrated with an LLM even when the encoder is not aligned with natural language through contrastive learning, as CLIP is.
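As a minimal sketch of what that integration looks like (the checkpoint name, projector shape, and dimensions below are illustrative assumptions, not this repo's exact configuration), Dinov2 patch tokens can be mapped into the LLM's embedding space by a small projector and concatenated with the text embeddings:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Hypothetical LLaVA-style wiring: a frozen Dinov2 encoder, a trainable
# projector into the LLM embedding space, and concatenation with text tokens.
vision_encoder = AutoModel.from_pretrained("facebook/dinov2-large")  # no language alignment
llm_hidden_size = 4096  # e.g. the hidden size of a 7B LLM (assumed)

projector = nn.Sequential(  # maps vision features -> LLM embedding space
    nn.Linear(vision_encoder.config.hidden_size, llm_hidden_size),
    nn.GELU(),
    nn.Linear(llm_hidden_size, llm_hidden_size),
)

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    vision_feats = vision_encoder(pixel_values).last_hidden_state  # (1, tokens, d_vision)

image_embeds = projector(vision_feats)                    # (1, tokens, llm_hidden_size)
text_embeds = torch.randn(1, 12, llm_hidden_size)         # stand-in for embedded prompt
inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)  # fed to the LLM
```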
## Future development of Dinov2 based LLaVA

Using Dinov2 as the vision encoder of LLaVA may have some disadvantages. Unlike CLIP, Dinov2 is not pre-aligned with the language embedding space. Even if you use both CLIP and Dinov2 and mix their tokens, the benchmark performance is not very strong (see arXiv:2401.06209 and the following table from their paper).

![Table](table.png)
If you have any ideas to improve it, please open an issue or just send an email to [email protected]. You are welcome!
## Example

Explain the photo in English: