# Minitron-8B-Base-GGUF

Llama.cpp static quantization of nvidia/Minitron-8B-Base

- Original Model: nvidia/Minitron-8B-Base
- Original dtype: `BF16` (bfloat16)
- Quantized by: llama.cpp b3600
- IMatrix dataset: here


## Files

### Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-8B-Base.Q8_0.gguf | Q8_0 | 8.80GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q6_K.gguf | Q6_K | 6.79GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q4_K.gguf | Q4_K | 5.23GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q3_K.gguf | Q3_K | 4.40GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q2_K.gguf | Q2_K | 3.57GB | ✅ Available | ⚪ Static | 📦 No |

### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-8B-Base.BF16.gguf | BF16 | 16.55GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.FP16.gguf | F16 | 16.55GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q8_0.gguf | Q8_0 | 8.80GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q6_K.gguf | Q6_K | 6.79GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q5_K.gguf | Q5_K | 5.99GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q5_K_S.gguf | Q5_K_S | 5.83GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q4_K.gguf | Q4_K | 5.23GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q4_K_S.gguf | Q4_K_S | 4.97GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.IQ4_NL.gguf | IQ4_NL | 4.98GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.IQ4_XS.gguf | IQ4_XS | 4.77GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q3_K.gguf | Q3_K | 4.40GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q3_K_L.gguf | Q3_K_L | 4.77GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q3_K_S.gguf | Q3_K_S | 3.97GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.IQ3_M.gguf | IQ3_M | 4.13GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.IQ3_S.gguf | IQ3_S | 3.99GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.IQ3_XS.gguf | IQ3_XS | 3.87GB | ✅ Available | ⚪ Static | 📦 No |
| Minitron-8B-Base.Q2_K.gguf | Q2_K | 3.57GB | ✅ Available | ⚪ Static | 📦 No |
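
As a rough way to compare these options, dividing a file's size by the model's parameter count (about 8.27B for Minitron-8B-Base) gives an approximate bits-per-weight figure. A minimal sketch of that arithmetic, using sizes from the table above and treating 1GB as 10^9 bytes, so the results are ballpark only:

```python
# Rough bits-per-weight estimate for a few quants of this ~8.27B-parameter model.
# File sizes are taken from the table above; 1 GB is treated as 1e9 bytes,
# so these numbers are approximations, not exact quantization bit widths.
PARAMS = 8.27e9

sizes_gb = {
    "Q8_0": 8.80,
    "Q6_K": 6.79,
    "Q4_K": 5.23,
    "Q3_K": 4.40,
    "Q2_K": 3.57,
}

for quant, gb in sizes_gb.items():
    bits_per_weight = gb * 1e9 * 8 / PARAMS
    print(f"{quant}: ~{bits_per_weight:.1f} bits/weight")
```

The estimates come out slightly above the nominal bit width (e.g. ~5.1 bits/weight for Q4_K) because some tensors, such as embeddings and the output layer, are stored at higher precision.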

## Downloading using huggingface-cli

If you do not have `huggingface-cli` installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```bash
huggingface-cli download legraphista/Minitron-8B-Base-GGUF --include "Minitron-8B-Base.Q8_0.gguf" --local-dir ./
```
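
If you prefer Python, the same single-file download can be done with the `huggingface_hub` library that the CLI ships with; a minimal sketch:

```python
from huggingface_hub import hf_hub_download

# Fetch one quant file from the repo into the current directory.
path = hf_hub_download(
    repo_id="legraphista/Minitron-8B-Base-GGUF",
    filename="Minitron-8B-Base.Q8_0.gguf",
    local_dir="./",
)
print(path)  # local path to the downloaded GGUF
```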

If the model file is large, it has been split into multiple files. To download them all to a local folder, run:

```bash
huggingface-cli download legraphista/Minitron-8B-Base-GGUF --include "Minitron-8B-Base.Q8_0/*" --local-dir ./
# see the FAQ for merging GGUFs
```
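
Equivalently from Python, `snapshot_download` with an `allow_patterns` filter fetches all chunks of a split quant in one call; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download every chunk under Minitron-8B-Base.Q8_0/ into the current directory.
snapshot_download(
    repo_id="legraphista/Minitron-8B-Base-GGUF",
    allow_patterns=["Minitron-8B-Base.Q8_0/*"],
    local_dir="./",
)
```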

## Inference

### Llama.cpp

```bash
llama.cpp/main -m Minitron-8B-Base.Q8_0.gguf --color -i -p "prompt here"
```
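
If you would rather run the GGUF from Python than from the llama.cpp CLI, the `llama-cpp-python` bindings can load these files; a minimal sketch, assuming `pip install llama-cpp-python` and the Q8_0 file in the working directory:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(model_path="Minitron-8B-Base.Q8_0.gguf", n_ctx=2048)

# This is a base (non-instruct) model, so give it text to continue.
out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```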

## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that only the lower quantizations benefit from the imatrix input (as per the HellaSwag results).

### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
2. Locate your GGUF chunks folder (ex: `Minitron-8B-Base.Q8_0`)
3. Run `gguf-split --merge Minitron-8B-Base.Q8_0/Minitron-8B-Base.Q8_0-00001-of-XXXXX.gguf Minitron-8B-Base.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

Got a suggestion? Ping me @legraphista!
