ZR1-1.5B GGUF Models

Choosing the Right Model Format

Selecting the correct model format depends on your hardware capabilities and memory constraints.

BF16 (Brain Float 16) – Use if BF16 acceleration is available

  • A 16-bit floating-point format designed for faster computation while retaining good precision.
  • Provides a dynamic range similar to FP32 but with lower memory usage.
  • Recommended if your hardware supports BF16 acceleration (check your device's specs, or use the quick check after the F16 section below).
  • Ideal for high-performance inference with reduced memory footprint compared to FP32.

📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.

📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.


F16 (Float 16) – More widely supported than BF16

  • A 16-bit floating-point format with high precision but a smaller range of values than BF16.
  • Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
  • Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.

📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.
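
The notes above assume you already know whether your accelerator supports BF16/FP16. Below is an optional quick check using PyTorch (an assumption on my part; it is not needed to run the GGUF files, and `torch.cuda.is_bf16_supported()` only covers CUDA GPUs):

```python
# Optional helper: report whether the local CUDA GPU supports BF16/FP16.
# Assumes PyTorch is installed; this is independent of the GGUF files themselves.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("BF16 supported:", torch.cuda.is_bf16_supported())  # True on Ampere-class GPUs and newer
    print("FP16 supported: True")  # effectively all CUDA GPUs that run modern PyTorch
else:
    print("No CUDA GPU detected -- consider a quantized file (Q4_K/Q6_K/Q8_0) for CPU inference.")
```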


Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

  • Lower-bit models (Q4_K) – Best for minimal memory usage, may have lower precision.
  • Higher-bit models (Q6_K, Q8_0) – Better accuracy, but require more memory.

📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.

📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
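
As a concrete example of CPU / low-VRAM usage, here is a minimal inference sketch using the llama-cpp-python bindings (my assumption; any llama.cpp-compatible runtime works) with the Q4_K file from this repo:

```python
# Minimal CPU inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The file name matches the Q4_K variant listed further down this card; adjust the path as needed.
from llama_cpp import Llama

llm = Llama(
    model_path="ZR1-1.5B-q4_k.gguf",  # quantized weights suitable for CPU / low VRAM
    n_ctx=4096,                        # context window
    n_threads=6,                       # CPU threads to use
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
    temperature=0.6,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```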


Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)

These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.

  • IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.

    • Use case: Best for ultra-low-memory devices where even Q4_K is too large.
    • Trade-off: Lower accuracy compared to higher-bit quantizations.
  • IQ3_S: Small block size for maximum memory efficiency.

    • Use case: Best for low-memory devices where IQ3_XS is too aggressive.
  • IQ3_M: Medium block size for better accuracy than IQ3_S.

    • Use case: Suitable for low-memory devices where IQ3_S is too limiting.
  • Q4_K: 4-bit quantization with block-wise optimization for better accuracy.

    • Use case: Best for low-memory devices where Q6_K is too large.
  • Q4_0: Pure 4-bit quantization, optimized for ARM devices.

    • Use case: Best for ARM-based devices or low-memory environments.

Summary Table: Model Format Selection

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
| --- | --- | --- | --- | --- |
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, low accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

Included Files & Details

ZR1-1.5B-bf16.gguf

  • Model weights preserved in BF16.
  • Use this if you want to requantize the model into a different format.
  • Best if your device supports BF16 acceleration.

ZR1-1.5B-f16.gguf

  • Model weights stored in F16.
  • Use if your device supports FP16, especially if BF16 is not available.

ZR1-1.5B-bf16-q8_0.gguf

  • Output & embeddings remain in BF16.
  • All other layers quantized to Q8_0.
  • Use if your device supports BF16 and you want a quantized version.

ZR1-1.5B-f16-q8_0.gguf

  • Output & embeddings remain in F16.
  • All other layers quantized to Q8_0.

ZR1-1.5B-q4_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q4_K.
  • Good for CPU inference with limited memory.

ZR1-1.5B-q4_k_s.gguf

  • Smallest Q4_K variant, using less memory at the cost of accuracy.
  • Best for very low-memory setups.

ZR1-1.5B-q6_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q6_K.

ZR1-1.5B-q8_0.gguf

  • Fully Q8 quantized model for better accuracy.
  • Requires more memory but offers higher precision.

ZR1-1.5B-iq3_xs.gguf

  • IQ3_XS quantization, optimized for extreme memory efficiency.
  • Best for ultra-low-memory devices.

ZR1-1.5B-iq3_m.gguf

  • IQ3_M quantization, offering a medium block size for better accuracy.
  • Suitable for low-memory devices.

ZR1-1.5B-q4_0.gguf

  • Pure Q4_0 quantization, optimized for ARM devices.
  • Best for low-memory environments.
  • Prefer IQ4_NL for better accuracy.
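
To fetch any single file above without cloning the whole repo, the huggingface_hub downloader can be used (a sketch; pick whichever filename matches your hardware):

```python
# Download one quantized file from the Hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Mungert/ZR1-1.5B-GGUF",
    filename="ZR1-1.5B-q4_k.gguf",  # swap in any file name from the list above
)
print("Downloaded to:", path)
```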

🚀 If you find these models useful

Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
👉 Free Network Monitor

💬 How to test:

  1. Click the chat icon (bottom right on any page)
  2. Choose an AI assistant type:
    • TurboLLM (GPT-4-mini)
    • FreeLLM (Open-source)
    • TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap scans
    • Quantum-readiness checks
    • Metasploit integration

🟡 TestLLM – Current experimental model (llama.cpp on 6 CPU threads):

  • Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs)
  • 🔧 Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟢 TurboLLM – Uses gpt-4-mini.

🔵 HugLLM – Open-source models (≈8B params):

  • 2x more tokens than TurboLLM
  • AI-powered log analysis
  • 🌐 Runs on Hugging Face Inference API

💡 Example AI Commands to Test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a quick Nmap vulnerability test"

ZR1-1.5B

ZR1-1.5B is a small reasoning model trained extensively on both verified coding and mathematics problems with reinforcement learning. The model outperforms Llama-3.1-70B-Instruct on hard coding tasks and improves upon the base R1-Distill-1.5B model by over 50%, while achieving strong scores on math evaluations and a 37.91% pass@1 accuracy on GPQA-Diamond with just 1.5B parameters.

Figure: ZR1-1.5B evaluation results on LiveBench with greedy sampling; the model is very token efficient.

Data

For training we utilized the PRIME Eurus-2-RL dataset which combines the following math and code datasets:

  • NuminaMath-CoT
  • APPS, CodeContests, TACO, and Codeforces train set

We filtered math data by validating that questions are correctly graded when calling the evaluator with reference ground truth, and we removed all code examples with an empty list of test cases. Our final dataset comprised roughly 400k math + 25k code samples.
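
The sketch below illustrates the two filtering rules described above; the example records and the grade_answer verifier are hypothetical stand-ins for the actual dataset and evaluator:

```python
# Illustrative data-filtering sketch: keep math questions only if the evaluator grades
# the reference answer as correct, and drop code problems with no test cases.
def grade_answer(question: str, reference: str) -> bool:
    # Hypothetical placeholder for the real math evaluator called with ground truth.
    return reference.strip() != ""

math_data = [
    {"question": "What is 2 + 2?", "ground_truth": "4"},
    {"question": "Broken item", "ground_truth": ""},
]
code_data = [
    {"problem": "two-sum", "test_cases": [("[2,7,11,15], 9", "[0,1]")]},
    {"problem": "no-tests", "test_cases": []},
]

math_kept = [ex for ex in math_data if grade_answer(ex["question"], ex["ground_truth"])]
code_kept = [ex for ex in code_data if len(ex["test_cases"]) > 0]
print(len(math_kept), "math kept;", len(code_kept), "code kept")
```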

Training Recipe

We employ PRIME (Process Reinforcement through IMplicit rEwards), an online RL algorithm with process rewards, motivated by the improvement over GRPO demonstrated in the PRIME paper, as well as by potentially more accurate token-level rewards from the learned process reward model. We used the training batch accuracy filtering method from PRIME for training stability, and the iterative context lengthening technique demonstrated in DeepScaleR for faster training, which has also been shown to improve token efficiency. After a warmup period with the maximum generation length set to 12k tokens, we sequentially increased the maximum generation length during training, starting at 8k tokens before increasing to 16k and then 24k.

We trained on a single 8xH100 node with the following specific algorithmic details.

  • PRIME + RLOO with token-level granularity
  • No <think> token prefill. 0.1 format reward/penalty
  • Main train batch size 256 with n=4 samples per prompt. veRL dynamic batch size with max batch size set per GPU to support training with large generation length
  • Max prompt length 1536; generation length increased over training. Started with 12k, intended to ease the model into shorter generation-length training
  • 12384 -> 8192 -> 16384 -> 24448
  • Start with 1 PPO epoch, increase to 4 during 24k stage
  • Accuracy filtering 0.2-0.8, relaxed to 0.01-0.99 during the 24k stage
  • Oversample batches 2x for accuracy filtering (see the sketch after this list)
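
The sketch below shows the accuracy-filtering and 2x oversampling step in isolation; all names are hypothetical, and the real logic lives in the PRIME/veRL training code:

```python
# Illustrative PRIME-style train-batch accuracy filtering: sample n rollouts per prompt,
# keep prompts whose mean accuracy is inside [low, high], and oversample 2x so the batch fills.
import random

def mean_accuracy(prompt: str, n: int = 4) -> float:
    # Placeholder: in practice this generates n rollouts and scores them with the verifier.
    return random.random()

def build_train_batch(prompts, batch_size=256, n=4, low=0.2, high=0.8):
    candidates = random.sample(prompts, min(2 * batch_size, len(prompts)))  # 2x oversample
    kept = [p for p in candidates if low <= mean_accuracy(p, n) <= high]
    return kept[:batch_size]

prompts = [f"problem-{i}" for i in range(4096)]
print(len(build_train_batch(prompts)), "prompts kept for this step")
```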

And the following training hyperparameters:

  • KL coefficient 0 (no KL divergence term)
  • Entropy coefficient 0.001
  • Actor LR 5e-7
  • Reward beta train 0.05
  • Reward LR 1e-6
  • Reward grad clip 10
  • Reward RM coefficient 5

Evaluation

Coding

| Model | Leetcode | LCB_generation |
| --- | --- | --- |
| ZR1-1.5B | 40% | 39.74% |
| R1-Distill-Qwen-1.5B | 12.22% | 24.36% |
| DeepCoder-1.5B | 21.11% | 35.90% |
| OpenHands-LM-1.5B | 18.88% | 29.49% |
| Qwen2.5-1.5B-Instruct | 20.56% | 24.36% |
| Qwen2.5-Coder-3B-Instruct | 35.55% | 39.74% |
| Llama-3.1-8B-Instruct | 14.44% | 23.08% |
| Llama-3.1-70B-Instruct | 37.22% | 34.62% |
| Eurus-2-7B-PRIME | 34.44% | 32.05% |
| Mistral-Small-2503 | - | 38.46% |
| Gemma-3-27b-it | - | 39.74% |
| Claude-3-Opus | - | 37.18% |

LiveBench

| Model | AMPS Hard | Math_Comp | LCB_Generation | Coding_Completion |
| --- | --- | --- | --- | --- |
| ZR1-1.5B | 74% | 60.42% | 39.74% | 12% |
| DeepCoder-1.5B | 69% | 61.46% | 35.90% | 12% |
| DeepScaleR-1.5B | 64% | 50% | 24.36% | 6% |
| OpenHands-LM-1.5B | 24% | 29.48% | 29.49% | 8% |
| R1-Distill-1.5B | 54% | 37.50% | 24.36% | 6% |
| Qwen2.5-1.5B-Instruct | 38% | 20.83% | 24.36% | 4% |
| Qwen2.5-Math-1.5B-Instruct | 49% | 36.46% | 0% | 0% |
| Qwen2.5-3B-Instruct | 41% | 17.71% | 28.21% | 10% |
| R1-Distill-7B | 74% | 61.46% | 44.87% | 14% |
| Qwen2.5-7B-Instruct | 56% | 29.17% | 38.46% | 40% |
| Qwen2.5-Math-7B-Instruct | 62% | 45.83% | 16.67% | 4% |
| R1-Distill-14B | 77% | 69.79% | 64.10% | 18% |
| Qwen2.5-14B-Instruct | 59% | 43.75% | 46.15% | 54% |
| R1-Distill-32B | 74% | 75% | 60.26% | 26% |
| QwQ-32B-Preview | 78% | 67.71% | 52.56% | 22% |
| QwQ-32B | 83% | 87.5% | 87.18% | 46% |
| Qwen2.5-32B-Instruct | 62% | 54.17% | 51.23% | 54% |
| Qwen2.5-Coder-32B-Instruct | 48% | 53.13% | 55.13% | 58% |
| R1-Distill-Llama-70B* | 65% | 78.13% | 69.23% | 34% |
| Qwen2.5-72B-Instruct | 66% | 52.08% | 50% | 62% |
| Qwen2.5-Math-72B-Instruct | 56% | 59.38% | 42.31% | 42% |
| DeepSeek-R1* | 88% | 88.54% | 79.48% | 54% |

General Math

| Model | AIME24 | AIME25 | AMC22_23 | AMC24 | GPQA-D | MATH500 | Minerva | Olympiad |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ZR1-1.5B | 33.75% | 27.29% | 72.06% | 59.17% | 37.91% | 88.34% | 33.52% | 56.87% |
| ZR1-1.5B (greedy) | 40% | 26.67% | 71.08% | 53.33% | 37.88% | 89.40% | 32.72% | 57.93% |
| DeepScaleR-1.5B | 42.92% | 27.71% | 74.40% | 60.69% | 34.66% | 89.36% | 35.50% | 59.37% |
| DeepScaleR-1.5B (greedy) | 33.33% | 33.33% | 67.47% | 57.77% | 29.29% | 84.60% | 31.62% | 52.44% |
| DeepCoder-1.5B | 41.88% | 24.79% | 75.30% | 59.72% | 36.46% | 83.60% | 32.01% | 56.39% |
| Still-3-1.5B | 31.04% | 23.54% | 65.51% | 56.94% | 34.56% | 86.55% | 33.50% | 53.55% |
| Open-RS3-1.5B | 31.67% | 23.75% | 64.08% | 51.67% | 35.61% | 84.65% | 29.46% | 52.13% |
| R1-Distill-1.5B | 28.96% | 22.50% | 63.59% | 50.83% | 33.87% | 84.65% | 31.39% | 51.11% |
| R1-Distill-1.5B (greedy) | 26.67% | 13.33% | 51.81% | 24.44% | 30.81% | 73.40% | 25.74% | 40% |
| Qwen2.5-Math-1.5B-Instruct (greedy) | 10% | 6.67% | 42.17% | 26.67% | 28.28% | 75.20% | 28.31% | 40.74% |
| Qwen2.5-Math-7B-Instruct (greedy) | 20% | 3.33% | 46.99% | 31.11% | 32.32% | 83% | 37.13% | 42.22% |
| Qwen2.5-Math-72B-Instruct (greedy) | 26.67% | 6.67% | 59.04% | 46.67% | 43.94% | 85.40% | 42.65% | 50.37% |
| Eurus-2-7B-PRIME (greedy) | 20% | 13.33% | 56.62% | 40% | 36.36% | 81.20% | 36.76% | 44.15% |
| DeepHermes-3-Llama-3-3B (think prompt, greedy) | 0% | 3.33% | 12.05% | 11.11% | 30.30% | 34.40% | 10.66% | 10.52% |
| OpenHands-LM-1.5B (greedy) | 0% | 0% | 10.84% | 4.44% | 23.74% | 36.80% | 12.50% | 10.22% |

Short CoT

Our direct answer system prompt was: “Give a direct answer without thinking first.”

The table reports the average greedy pass@1 score across the following math evals: AIME24, AIME25, AMC22_23, AMC24, GPQA-Diamond, MATH-500, MinervaMath, OlympiadBench

| Model | avg pass@1 | max_tokens |
| --- | --- | --- |
| ZR1-1.5B | 51.13% | 32768 |
| ZR1-1.5B (truncated) | 46.83% | 4096 |
| ZR1-1.5B (direct answer prompt) | 45.38% | 4096 |
| ZR1-1.5B (truncated) | 40.39% | 2048 |
| ZR1-1.5B (direct answer prompt) | 37% | 2048 |
| Qwen-2.5-Math-1.5B-Instruct | 32.25% | 2048 |
| Qwen-2.5-Math-7B-Instruct | 37.01% | 2048 |

For Leetcode and LiveBench, we report pass@1 accuracy with greedy sampling. For the rest of the evaluations we report pass@1 accuracy averaged over 16 samples per question, with temperature 0.6 and top_p 0.95.
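
For clarity, pass@1 averaged over k samples for a single question is simply the fraction of sampled completions that pass the checker; a small sketch follows (sample_model and is_correct are hypothetical placeholders):

```python
# Sketch of sampled pass@1 for one question: average correctness over n_samples completions.
def pass_at_1(question, sample_model, is_correct, n_samples=16, temperature=0.6, top_p=0.95):
    completions = [sample_model(question, temperature=temperature, top_p=top_p)
                   for _ in range(n_samples)]
    return sum(is_correct(question, c) for c in completions) / n_samples

# Toy usage with stubbed model/checker:
score = pass_at_1(
    "What is 1 + 1?",
    sample_model=lambda q, **kw: "2",
    is_correct=lambda q, c: c.strip() == "2",
)
print(score)  # 1.0
```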

We use the following settings for SGLang:

python -m sglang.launch_server --model-path <model> --host 0.0.0.0 --port 5001 --mem-fraction-static=0.8 --dtype bfloat16 --random-seed 0 --chunked-prefill-size -1 --attention-backend triton --sampling-backend pytorch --disable-radix-cache --disable-cuda-graph-padding  --disable-custom-all-reduce --disable-mla --triton-attention-reduce-in-fp32
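
Once the server is up, it can be queried over HTTP; the client sketch below assumes SGLang's native /generate endpoint on the port used above (the prompt is illustrative):

```python
# Minimal client sketch for the SGLang server launched above on port 5001.
import requests

resp = requests.post(
    "http://localhost:5001/generate",
    json={
        "text": "Solve: what is 17 * 23?",
        "sampling_params": {"temperature": 0.6, "top_p": 0.95, "max_new_tokens": 512},
    },
    timeout=300,
)
print(resp.json()["text"])
```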

For vllm we disable prefix caching and chunked prefill.
