Mistral 7B Instruct v0.2 - GGUF

This repo provides quantized versions of mistralai/Mistral-7B-Instruct-v0.2. Two quantization methods were used:

  • Q5_K_M: 5-bit, recommended; low quality loss.
  • Q4_K_M: 4-bit, recommended; balanced size and quality (a download sketch follows this list).
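
To fetch one of these files locally, here is a minimal sketch using huggingface_hub. The exact GGUF filename is an assumption, so check the repo's file list before running.

```python
# Minimal download sketch using huggingface_hub.
# NOTE: the filename below is an assumption -- verify the exact GGUF
# filenames in this repo's file list before running.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="wenqiglantz/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # assumed filename
)
print(model_path)  # local path to the cached GGUF file
```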

Description

This repo contains GGUF format model files for Mistral AI's Mistral 7B Instruct v0.2.
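
GGUF files can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the model path and the n_ctx / n_gpu_layers settings are illustrative assumptions, not prescriptions.

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path and runtime settings are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```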

This model was quantized in Google Colab. The notebook is linked here.
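
For reference, a quantization pass like this typically follows llama.cpp's convert-then-quantize workflow. The sketch below is an assumed reconstruction, not the notebook's exact steps; script and binary names vary across llama.cpp versions.

```python
# Assumed reconstruction of a llama.cpp quantization workflow -- not
# the notebook's exact steps. Script/binary names (convert_hf_to_gguf.py,
# llama-quantize) differ across llama.cpp versions; adjust to your checkout.
import subprocess

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "Mistral-7B-Instruct-v0.2",  # local HF model directory
        "--outfile", "mistral-7b-instruct-v0.2.fp16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)

# 2) Quantize to Q4_K_M; repeat with "Q5_K_M" for the other file.
subprocess.run(
    [
        "llama.cpp/llama-quantize",
        "mistral-7b-instruct-v0.2.fp16.gguf",
        "mistral-7b-instruct-v0.2.Q4_K_M.gguf",
        "Q4_K_M",
    ],
    check=True,
)
```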

Model details

Model size: 7.24B params
Architecture: llama