---
license: cc-by-nc-4.0
inference: false
---

<br>
<br>

# LLaVA-Plus Model Card

## Model details

**Model type:**
LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills

**Model date:**
LLaVA-Plus-v0-7b was trained in September 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/llava-plus/

**Where to send questions or comments about the model:**
https://github.com/LLaVA-VL/LLaVA-Plus-Codebase/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA-Plus is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

https://huggingface.co/datasets/LLaVA-VL/llava-plus-data
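
The data can be pulled with the Hugging Face `datasets` library. The snippet below is a minimal sketch rather than part of the official release: it assumes the repository's files are in a layout that `datasets` can auto-detect, which may not hold if the data is shipped as raw JSON annotation files plus image folders.

```python
# Sketch: download and inspect the LLaVA-Plus instruction-tuning data.
# Assumes the files in LLaVA-VL/llava-plus-data can be auto-detected by
# `datasets`; if they are raw annotation files instead, fall back to
# `huggingface_hub.snapshot_download` and load them manually.
from datasets import load_dataset

dataset = load_dataset("LLaVA-VL/llava-plus-data")

print(dataset)                           # available splits and their sizes
first_split = next(iter(dataset))
print(dataset[first_split][0])           # first example of the first split
```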