---
license: apache-2.0
---

# Model Card: LLaVA-Video-7B-Qwen2

## **Overview:**

This model card was not created by the original authors of the model. It documents a conversion of the original model to the Hugging Face format; the original model can be found [here](https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2). The conversion was performed with the Transformers conversion script. Its purpose was to make the model servable with [vLLM](https://github.com/vllm-project/vllm), as the original checkpoint format is currently not supported by vLLM.

## **Conversion Details:**

### **Original Model Source:**

[LLaVA-Video-7B-Qwen2](https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2) by lmms-lab

**Conversion:** Transformers conversion [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llava/convert_llava_weights_to_hf.py)

**Purpose of Conversion:** Serve the model via [vLLM](https://github.com/vllm-project/vllm)

**Converted By:** [Sanya Choi](https://huggingface.co/SanyaChoi)

### **Updated Conversion Model:**

This model is a conversion of the original LLaVA-NeXT-Video weights. A better conversion has since been made available by the vLLM contributors ([here](https://huggingface.co/Isotr0py/LLaVA-Video-7B-Qwen2-hf)); that version was converted from the original LLaVA-OneVision weights.

### **Known Limitations:**

Because this is a converted model, it may exhibit bugs or produce outputs that are inconsistent with the original model's performance and accuracy. These discrepancies are likely due to differences in compatibility and **are not** reflective of the original model's quality.

### **Usage**

This model is best used for experimentation and for tasks compatible with [vLLM](https://github.com/vllm-project/vllm) serving. For critical applications, it is recommended to cross-reference outputs with the original model to ensure reliability and correctness.
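As a minimal sketch of the intended serving setup (assuming vLLM is installed and `<this-repo-id>` is replaced with this repository's Hugging Face path), the converted model can be exposed through vLLM's OpenAI-compatible server:

```shell
# Launch vLLM's OpenAI-compatible API server for the converted checkpoint.
# <this-repo-id> is a placeholder for this repository's Hugging Face model ID.
vllm serve <this-repo-id> --dtype auto

# Query the running server (default port 8000) with a chat completion request.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<this-repo-id>",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Flags such as `--dtype auto` are illustrative defaults; consult the vLLM documentation for the options appropriate to your hardware.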
### **Acknowledgments**

We acknowledge the original creators of the model for their work and contributions. The conversion and hosting of this model aim to expand its usability while maintaining the integrity of the [original work](https://github.com/LLaVA-VL/LLaVA-NeXT).

### **Disclaimers**

This model is provided "as is" without warranty of any kind. Any issues, bugs, or errors encountered are solely related to the conversion process or platform compatibility and are not attributable to the [original model](https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2).