Mistral-7B-Instruct-v0.2-GGUF

Description

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.
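Because this is an instruct fine-tune, prompts should follow Mistral's `[INST]` chat template. A minimal sketch (the helper name is hypothetical; in practice a tokenizer's `apply_chat_template` adds the `<s>` BOS token and should be preferred):

```python
def build_mistral_prompt(turns):
    """Build a Mistral-Instruct prompt from (user, assistant) pairs.

    The final pair may use assistant=None to request a completion.
    Illustrative sketch of the [INST] template; the leading <s> BOS
    token is normally added by the tokenizer, not by the text.
    """
    parts = []
    for user, assistant in turns:
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

prompt = build_mistral_prompt([("What is GGUF?", None)])
# -> "[INST] What is GGUF? [/INST]"
```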

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:

  • 32k context window (vs 8k context in v0.1)
  • RoPE theta = 1e6
  • No Sliding-Window Attention
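The RoPE theta change is what underpins the longer context: raising the base from the commonly used 1e4 (assumed here for v0.1) to 1e6 slows the positional rotations so distant positions remain distinguishable across 32k tokens. A sketch of the standard RoPE inverse-frequency formula, using 128 as Mistral-7B's per-head dimension:

```python
def rope_inv_freq(theta, dim):
    # Per-pair inverse frequencies for rotary position embeddings:
    # inv_freq[i] = theta ** (-2*i / dim), one value per dimension pair.
    return [theta ** (-2 * i / dim) for i in range(dim // 2)]

old = rope_inv_freq(10_000.0, 128)     # assumed v0.1-style base of 1e4
new = rope_inv_freq(1_000_000.0, 128)  # v0.2's RoPE theta = 1e6
# Every non-constant frequency is smaller with theta = 1e6, i.e. the
# rotations advance more slowly with position, which is what makes a
# 32k context workable without retraining the positional scheme.
```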

For full details of this model, please read our paper and release blog post.

GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations

  • 2-bit
  • 3-bit
  • 4-bit
  • 5-bit
  • 6-bit
  • 8-bit
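A rough back-of-envelope for the file size each bit width implies, given the reported 7.24B parameters (this ignores GGUF metadata and the higher-precision tensors that k-quant schemes retain, so real files run somewhat larger):

```python
PARAMS = 7.24e9  # reported parameter count

def approx_gguf_size_gb(bits_per_weight):
    # Naive estimate: params * bits / 8 bytes, converted to GB.
    # Real GGUF files add per-block scales and keep some tensors
    # (e.g. embeddings) at higher precision, so treat as a lower bound.
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(bits):.2f} GB")
# e.g. the 4-bit estimate is ~3.62 GB
```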


Model tree for QuantFactory/Mistral-7B-Instruct-v0.2-GGUF

