GGUF version of Felladrin/Minueza-32Mx2-Chat.

It was not possible to quantize the model after converting it to F16/F32 GGUF, so only those two versions are available; F32 is the recommended one, as it offers better precision.
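
For reference, below is a minimal sketch of loading the F32 file with llama-cpp-python. The local filename is an assumption; check the repository's file listing for the exact name.

```python
# Minimal sketch: running the F32 GGUF locally with llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Minueza-32Mx2-Chat.F32.gguf",  # hypothetical filename; verify in the repo
    n_ctx=2048,  # context window; adjust as needed
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```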

- Model size: 43M params
- Architecture: llama