Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long-context capabilities up to 128k tokens without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks. It is an instruction-finetuned version of Mistral-Small-3.1-24B-Base-2503.

Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
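
Rough arithmetic behind that footprint claim, as a back-of-the-envelope sketch (activations, the KV cache, and the vision encoder add real overhead on top of the weights):

```python
params = 24e9          # 24 billion parameters
bits_per_param = 4     # INT4 quantization
weights_gb = params * bits_per_param / 8 / 1e9
print(f"~{weights_gb:.0f} GB of INT4 weights")  # ~12 GB: fits a 24 GB RTX 4090
```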

It is ideal for:

- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.

For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Mistral Small 3.1 in our blog post.

This version has been fine-tuned on the louisbrulenaudet/code-securite-sociale dataset (the French Code de la sécurité sociale).
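
To inspect the fine-tuning corpus, the dataset can be pulled straight from the Hub; a loading sketch (split and column names are whatever the dataset defines, printed rather than assumed):

```python
from datasets import load_dataset

# Pull the fine-tuning corpus from the Hugging Face Hub.
ds = load_dataset("louisbrulenaudet/code-securite-sociale")
print(ds)  # shows the available splits and columns
```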

The INT4 quantized weights are stored in the repository root directory.
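
A minimal loading sketch, assuming the root-directory INT4 weights ship with their quantization config inside the checkpoint (the U8 tensor type suggests 4-bit weights packed into uint8, e.g. bitsandbytes serialization), so no extra flags should be needed. Because the base model is multimodal, `AutoModelForImageTextToText` may be required instead of `AutoModelForCausalLM`; the example question is illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

REPO = "martossien/Mistral-Small-3.1-24B-Instruct-2503-code_secu_sociale_INT4"

tokenizer = AutoTokenizer.from_pretrained(REPO)
# Assumption: the checkpoint carries its own quantization_config, so the 4-bit
# weights load as-is; swap in AutoModelForImageTextToText if the multimodal
# config rejects the causal-LM auto class.
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    device_map="auto",           # place layers on available GPU/CPU memory
    torch_dtype=torch.bfloat16,  # dtype for the non-quantized tensors
)

# Illustrative prompt for the fine-tuned domain (French social security law).
messages = [{"role": "user",
             "content": "Que prévoit l'article L111-1 du Code de la sécurité sociale ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```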

Safetensors · Model size: 13B params · Tensor types: F32, FP16, U8 (the stored parameter count is lower than the 24B original because the INT4 weights are packed into U8 tensors)
