praeclarumjj3 committed
Commit b8d8fe0 · Parent(s): 5e787de

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -4,7 +4,7 @@ license: apache-2.0
 
 # VCoder LLaVA-1.5-7b
 
-VCoder LLaVA-1.5-7b was trained on COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) model weights. It was introduced in the paper [VCoder: Versatile Visual Encoder for Accurate Object-Level Perception with Large Language Models](https://arxiv.org/abs/2211.06220) by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).
+VCoder LLaVA-1.5-7b was trained on COST training dataset in December 2023. It uses the pretrained [LLaVA-1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) model weights. It was introduced by Jain et al. in [this repository](https://github.com/SHI-Labs/VCoder).
 
 VCoder is an adapter for improving existing Vision LLMs at object-level perception tasks with the use of perception modalities as control inputs while retaining performance on other tasks.
 