Mistral-Quine-24B-GGUF (mistral-small-3.1-24b-instruct-2503-jackterated-GGUF)

This is an experimental, text-only version for now. For more information about the abliteration technique, refer to this notebook and check out @FailSpy.

Model size: 23.6B params
Architecture: llama
Format: GGUF
Quantizations: 4-bit, 6-bit
Downloads last month: 122

Model tree for JackCloudman/mistral-small-3.1-24b-instruct-2503-jackterated-GGUF