Qwen2.5-VL-72B-Instruct
Converted and quantized with HimariO's llama.cpp fork, following this procedure. No imatrix was used.
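For reference, a minimal sketch of the quantization step, assuming the standard llama.cpp `llama-quantize` tool from the fork's build (the file names here are placeholders, not the exact paths used for this release):

```shell
# Hypothetical sketch: quantize an F16 GGUF conversion down to Q4_K_M
# using llama.cpp's llama-quantize tool. File names are placeholders.
./llama-quantize Qwen2.5-VL-72B-Instruct-F16.gguf \
    Qwen2.5-VL-72B-Instruct-Q4_K_M.gguf Q4_K_M
```

The vision projector (`mmproj`) file is kept at F16 and is not quantized in this step.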
The fork is currently required to run inference, and there is no guarantee these checkpoints will work with future builds. Temporary builds are available here; the latest tested build as of writing is qwen25-vl-b4899-bc4163b.
Edit: As of 1 April 2025, inference support has been added to koboldcpp.
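A hedged sketch of loading the model in koboldcpp (the `--model` and `--mmproj` flag names are taken from koboldcpp's CLI; exact invocation may differ by version and the file paths are placeholders):

```shell
# Hypothetical sketch: serve the quantized model with its vision projector
# in koboldcpp. Paths are placeholders for the downloaded GGUF files.
python koboldcpp.py \
    --model Qwen2.5-VL-72B-Instruct-Q4_K_M.gguf \
    --mmproj qwen2.5-vl-72b-instruct-vision-f16.gguf
```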
Usage

```
./llama-qwen2vl-cli -m Qwen2.5-VL-72B-Instruct-Q4_K_M.gguf --mmproj qwen2.5-vl-72b-instruct-vision-f16.gguf -p "Please describe this image." --image ./image.jpg
```