---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
base_model:
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
pipeline_tag: image-text-to-text
tags:
- neuralmagic
- redhat
- llmcompressor
- quantized
- FP8
---
# Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic
## Model Overview
- **Model Architecture:** Mistral3ForConditionalGeneration
- **Input:** Text / Image
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** FP8
- **Weight quantization:** FP8
- **Intended Use Cases:** The model is ideal for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages not officially supported by the model.
- **Release Date:** 04/15/2025
- **Version:** 1.0
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing activations and weights of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) to FP8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only the weights and activations of the linear operators within the transformer blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
The [llm-compressor](https://github.com/vllm-project/llm-compressor) library is used for quantization.
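To illustrate the difference between the two schemes, here is a minimal sketch of symmetric FP8 scaling for a single linear layer. This is an illustration only, not the llm-compressor implementation; it assumes PyTorch with the `torch.float8_e4m3fn` dtype, and the function names are hypothetical:
```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # ~448 for e4m3

def quantize_weights_per_channel(w: torch.Tensor):
    # Static, symmetric scheme: one scale per output channel, computed once from the weights.
    scale = w.abs().amax(dim=1, keepdim=True) / FP8_MAX
    return (w / scale).to(torch.float8_e4m3fn), scale

def quantize_activations_per_token(x: torch.Tensor):
    # Dynamic, symmetric scheme: one scale per token, computed on the fly at inference time.
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

w = torch.randn(4096, 4096)
x = torch.randn(8, 4096)
w_fp8, w_scale = quantize_weights_per_channel(w)
x_fp8, x_scale = quantize_activations_per_token(x)

# Dequantized matmul approximates the original x @ w.T
y = (x_fp8.float() * x_scale) @ (w_fp8.float() * w_scale).T
```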
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoProcessor

model_id = "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)

# Build the prompt with the model's chat template
processor = AutoProcessor.from_pretrained(model_id)
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompts = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Load the model and generate
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
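For example, after starting an OpenAI-compatible server (e.g. with `vllm serve RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic`), the endpoint can be queried with the OpenAI Python client. The host, port, and API key below are placeholder assumptions for a default local deployment:
```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default port 8000; no real API key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```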
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForImageTextToText, AutoProcessor
# Load model
model_stub = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model_name = model_stub.split("/")[-1]
model = AutoModelForImageTextToText.from_pretrained(model_stub)
processor = AutoProcessor.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
    ignore=["language_model.lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    targets="Linear",
    scheme="FP8_dynamic",
)

# Apply quantization
oneshot(
    model=model,
    recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
processor.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
## Evaluation
The model was evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [evalplus](https://github.com/evalplus/evalplus), using [vLLM](https://docs.vllm.ai/en/latest/) as the inference engine. The commands below reproduce the evaluations; results are summarized in the table at the end of this section.
**MMLU**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks mmlu \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**ARC Challenge**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks arc_challenge \
--num_fewshot 25 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**GSM8k**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.9,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks gsm8k \
--num_fewshot 8 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Hellaswag**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks hellaswag \
--num_fewshot 10 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Winogrande**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks winogrande \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**TruthfulQA**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks truthfulqa \
--num_fewshot 0 \
--apply_chat_template\
--batch_size auto
```
**MMLU-pro**
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunk_prefill=True,tensor_parallel_size=2 \
--tasks mmlu_pro \
--num_fewshot 5 \
--apply_chat_template\
--fewshot_as_multiturn \
--batch_size auto
```
**Coding**
The commands below run HumanEval; they can be reused for MBPP by replacing the dataset name.
*Generation*
```
python3 codegen/generate.py \
--model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
*Sanitization*
```
python3 evalplus/sanitize.py \
humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic_vllm_temp_0.2
```
*Evaluation*
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic_vllm_temp_0.2-sanitized
```
**Results**

| Category | Benchmark | Mistral-Small-3.1-24B-Instruct-2503 | Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic (this model) | Recovery |
|---|---|---|---|---|
| OpenLLM v1 | MMLU (5-shot) | 80.67 | 80.71 | 100.1% |
| | ARC Challenge (25-shot) | 72.78 | 72.87 | 100.1% |
| | GSM-8K (5-shot, strict-match) | 65.35 | 62.47 | 95.6% |
| | Hellaswag (10-shot) | 83.70 | 83.67 | 100.0% |
| | Winogrande (5-shot) | 83.74 | 82.56 | 98.6% |
| | TruthfulQA (0-shot, mc2) | 70.62 | 70.88 | 100.4% |
| | Average | 76.14 | 75.53 | 99.2% |
| | MMLU-Pro (5-shot) | 67.25 | 66.86 | 99.4% |
| | GPQA CoT main (5-shot) | 42.63 | 41.07 | 96.3% |
| | GPQA CoT diamond (5-shot) | 45.96 | 45.45 | 98.9% |
| Coding | HumanEval pass@1 | 84.70 | 84.70 | 100.0% |
| | HumanEval+ pass@1 | 79.50 | 79.30 | 99.8% |
| | MBPP pass@1 | 71.10 | 70.00 | 98.5% |
| | MBPP+ pass@1 | 60.60 | 59.50 | 98.2% |