This is a 5.0 bpw (bits per weight), 8-bit head (h8) quantization of huihui-ai/QwQ-32B-abliterated, produced with exllamav2 with this PR applied.
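For reference, a quantization like this one can be produced with exllamav2's `convert.py` script; a sketch of the invocation is below. The directory paths are placeholders, and the exact script location depends on your exllamav2 checkout.

```shell
# Quantize a model to 5.0 bpw with 8-bit head layers using exllamav2.
# -i  : directory containing the source (FP16) model
# -o  : working directory for intermediate files
# -cf : output directory for the final quantized model
# -b  : target bits per weight (5.0 here)
# -hb : bits for the head layer (8 here, i.e. "h8")
python convert.py \
  -i /path/to/QwQ-32B-abliterated \
  -o /path/to/workdir \
  -cf /path/to/QwQ-32B-abliterated-exl2-5bpw-h8 \
  -b 5.0 \
  -hb 8
```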


Model tree for Uninformed/QwQ-32B-abliterated-exl2-5bpw-h8

- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/QwQ-32B
- Finetuned: huihui-ai/QwQ-32B-abliterated
- Quantized: this model