----- NOTE -----

I have removed the MTP (multi-token prediction) layers so that the model works with llama.cpp, which does not support them yet. Quality might be degraded.

A proper MTP implementation would be better, but this workaround will do until one exists.

For more information, feel free to open a discussion here.

Here is a tutorial on how I made these quants without MTP (a rough sketch of the idea also follows below this note):

https://huggingface.co/XiaomiMiMo/MiMo-7B-RL/discussions/5

----- NOTE -----
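
The tutorial linked above has the exact steps. Very roughly, the idea is to strip the MTP weights from the checkpoint and make the config look like a plain Qwen2 model before running llama.cpp's converter. The sketch below is only illustrative: the "mtp" key pattern, the num_nextn_predict_layers config field, and the single-shard checkpoint are assumptions, not something verified against the actual MiMo files.

```python
# Illustrative sketch: drop MTP tensors from a (single-shard) safetensors
# checkpoint and patch config.json so that convert_hf_to_gguf.py treats the
# model as plain qwen2. Key names and config fields here are assumptions.
import json
from safetensors.torch import load_file, save_file

state = load_file("model.safetensors")                     # original checkpoint
kept = {k: v for k, v in state.items() if "mtp" not in k}  # assumed MTP key pattern
print(f"dropped {len(state) - len(kept)} tensors")
save_file(kept, "model.safetensors")                       # overwrite without MTP weights

with open("config.json") as f:
    cfg = json.load(f)
cfg["architectures"] = ["Qwen2ForCausalLM"]  # assumption: remaining weights are Qwen2-shaped
cfg.pop("num_nextn_predict_layers", None)    # hypothetical MTP-specific field
with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```

After that, the usual convert_hf_to_gguf.py and llama-quantize steps from llama.cpp apply.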

GGUF Quants for: XiaomiMiMo/MiMo-7B-RL

Model by: XiaomiMiMo

Quants by: quantflex

Run with llama.cpp:

```
./llama-cli -m MiMo-7B-RL-nomtp-Q5_K_M.gguf -cnv
```
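
If you would rather call the model from Python than from the CLI, the llama-cpp-python bindings can load the same GGUF file. The file name, context size, and GPU offload below are just example settings; adjust them for your setup.

```python
# Example using llama-cpp-python (pip install llama-cpp-python).
# Assumes the Q5_K_M file from this repo is in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="MiMo-7B-RL-nomtp-Q5_K_M.gguf",
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to the GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what MiMo-7B-RL is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```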

Model size: 7.62B params
Architecture: qwen2

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 32-bit.
