Commit b8d8fe0 (parent: 5e787de)
Update README.md
README.md CHANGED
@@ -4,7 +4,7 @@ license: apache-2.0
 
 # VCoder LLaVA-1.5-7b
 
-VCoder LLaVA-1.5-7b was trained on the COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) model weights. It was introduced
+VCoder LLaVA-1.5-7b was trained on the COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).
 
 VCoder is an adapter for improving existing Vision LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks.
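As a minimal sketch (not part of this commit), the weights referenced in the card can be fetched with `huggingface_hub`. The base LLaVA-1.5-7b id comes from the card itself; the VCoder checkpoint id below is an assumption inferred from the model name, and the actual inference entry points live in the linked GitHub repository.

```python
# Minimal sketch, assuming the repo ids below. snapshot_download returns the
# local cache directory containing the downloaded checkpoint files.
from huggingface_hub import snapshot_download

# Base LLaVA-1.5-7b weights, as linked in the model card.
llava_dir = snapshot_download(repo_id="liuhaotian/llava-v1.5-7b")

# VCoder LLaVA-1.5-7b checkpoint (hypothetical repo id, for illustration only).
vcoder_dir = snapshot_download(repo_id="shi-labs/vcoder_llava-v1.5-7b")

print("base weights:", llava_dir)
print("vcoder weights:", vcoder_dir)
```

Both calls return local directories that downstream inference code can point to.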