Minitron-4B-Base-GGUF

Llama.cpp static quantization of nvidia/Minitron-4B-Base

Original Model: nvidia/Minitron-4B-Base
Original dtype: BF16 (bfloat16)
Architecture: nemotron
Parameters: 4.19B
Quantized by: llama.cpp b3600
IMatrix dataset: here


Files

Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-4B-Base.Q8_0.gguf | Q8_0 | 4.46GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q6_K.gguf | Q6_K | 3.45GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q4_K.gguf | Q4_K | 2.70GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q3_K.gguf | Q3_K | 2.30GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q2_K.gguf | Q2_K | 1.90GB | ✅ Available | ⚪ Static | 📦 No |

All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-4B-Base.BF16.gguf | BF16 | 8.39GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.FP16.gguf | F16 | 8.39GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q8_0.gguf | Q8_0 | 4.46GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q6_K.gguf | Q6_K | 3.45GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q5_K.gguf | Q5_K | 3.06GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q5_K_S.gguf | Q5_K_S | 2.99GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q4_K.gguf | Q4_K | 2.70GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q4_K_S.gguf | Q4_K_S | 2.58GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.IQ4_NL.gguf | IQ4_NL | 2.58GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.IQ4_XS.gguf | IQ4_XS | 2.48GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q3_K.gguf | Q3_K | 2.30GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q3_K_L.gguf | Q3_K_L | 2.45GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q3_K_S.gguf | Q3_K_S | 2.12GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.IQ3_M.gguf | IQ3_M | 2.18GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.IQ3_S.gguf | IQ3_S | 2.12GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.IQ3_XS.gguf | IQ3_XS | 2.06GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-4B-Base.Q2_K.gguf | Q2_K | 1.90GB | ✅ Available | ⚪ Static | 📦 No |

Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```
huggingface-cli download legraphista/Minitron-4B-Base-GGUF --include "Minitron-4B-Base.Q8_0.gguf" --local-dir ./
```
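If you prefer to script the download, the same file can be fetched with the `huggingface_hub` Python API. A minimal sketch, assuming `huggingface_hub` is installed and the Q8_0 quant is the one you want:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the current directory.
path = hf_hub_download(
    repo_id="legraphista/Minitron-4B-Base-GGUF",
    filename="Minitron-4B-Base.Q8_0.gguf",
    local_dir="./",
)
print(path)  # local path of the downloaded GGUF
```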

If the model file is large, it has been split into multiple chunks. To download them all to a local folder, run:

```
huggingface-cli download legraphista/Minitron-4B-Base-GGUF --include "Minitron-4B-Base.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
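The same pattern-based download works from Python via `snapshot_download`. A minimal sketch, assuming the chunk folder name below (illustrative; only quants that were actually split have such a folder):

```python
from huggingface_hub import snapshot_download

# Fetch every chunk of a split quant in one call.
snapshot_download(
    repo_id="legraphista/Minitron-4B-Base-GGUF",
    allow_patterns=["Minitron-4B-Base.Q8_0/*"],
    local_dir="./",
)
```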

Inference

Llama.cpp

```
llama.cpp/llama-cli -m Minitron-4B-Base.Q8_0.gguf --color -i -p "prompt here"
```

(On llama.cpp builds that predate the mid-2024 binary rename, the executable is `main` rather than `llama-cli`.)
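If you would rather run the model from Python, the llama-cpp-python bindings load GGUF files directly. A minimal sketch, assuming `llama-cpp-python` is installed and the Q8_0 file sits in the working directory:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window.
llm = Llama(model_path="Minitron-4B-Base.Q8_0.gguf", n_ctx=2048)

# Minitron-4B-Base is a base model, so plain text completion
# (no chat template) is the appropriate mode.
out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```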

FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, only the lower quantizations appear to benefit from the imatrix input (as measured by HellaSwag results).

How do I merge a split GGUF?

  1. Make sure you have `gguf-split` available
  2. Locate your GGUF chunks folder (e.g. `Minitron-4B-Base.Q8_0`)
  3. Run `gguf-split --merge Minitron-4B-Base.Q8_0/Minitron-4B-Base.Q8_0-00001-of-XXXXX.gguf Minitron-4B-Base.Q8_0.gguf`
    • Make sure to point `gguf-split` to the first chunk of the split (a scripted version is sketched below).
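If you script this step, here is a minimal Python sketch, assuming the `gguf-split` binary from llama.cpp is on your PATH (newer builds name it `llama-gguf-split`) and the chunk folder exists locally:

```python
import glob
import subprocess

# Find the first chunk of the split (gguf-split must be pointed at it).
first_chunk = sorted(glob.glob("Minitron-4B-Base.Q8_0/*-00001-of-*.gguf"))[0]

# Merge all chunks into a single GGUF file.
subprocess.run(
    ["gguf-split", "--merge", first_chunk, "Minitron-4B-Base.Q8_0.gguf"],
    check=True,
)
```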

Got a suggestion? Ping me @legraphista!
