Update README.md
README.md CHANGED
@@ -4,6 +4,8 @@ license: mit
## MLCD-ViT-bigG Model Card


+### 🙌 **[LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) now supports MLCD-ViT-bigG.**
+
MLCD-ViT-bigG is a state-of-the-art vision transformer model enhanced with 2D Rotary Position Embedding (RoPE2D), achieving superior performance on document understanding and visual question answering tasks. Developed by DeepGlint AI, this model demonstrates exceptional capabilities in processing complex visual-language interactions.

We adopted the official [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) framework and the official training dataset [LLaVA-NeXT-Data](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Data) to evaluate the foundational vision models.
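
For readers unfamiliar with the RoPE2D mentioned above: the idea is to apply rotary position encoding independently along the row and column axes of the ViT patch grid, so each attention head sees both spatial coordinates. The snippet below is a minimal illustrative sketch of that idea in PyTorch; the function names, shapes, and frequency layout are assumptions made for illustration, not the MLCD-ViT-bigG implementation.

```python
import torch

def rope_1d_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    # Rotation angle for each (position, frequency) pair; shape (len(positions), dim // 2).
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return positions.float()[:, None] * inv_freq[None, :]

def apply_rope_2d(x: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
    # x: (num_patches, head_dim) features for one attention head, patches in row-major order.
    # Hypothetical sketch: half of the channel pairs rotate with the row position,
    # the other half with the column position.
    num_patches, head_dim = x.shape
    assert num_patches == grid_h * grid_w and head_dim % 4 == 0

    rows = torch.arange(grid_h).repeat_interleave(grid_w)  # row index of each patch
    cols = torch.arange(grid_w).repeat(grid_h)             # column index of each patch

    angles = torch.cat(
        [rope_1d_angles(rows, head_dim // 2), rope_1d_angles(cols, head_dim // 2)], dim=-1
    )  # (num_patches, head_dim // 2)

    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin  # rotate each channel pair by its angle
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out

# Example: a 24x24 patch grid (e.g. a 336px image with patch size 14), head dim 64.
q = torch.randn(24 * 24, 64)
q_rot = apply_rope_2d(q, grid_h=24, grid_w=24)
```

In an attention layer this rotation would typically be applied to the query and key projections before the dot product, which is what lets relative 2D offsets between patches enter the attention scores directly.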