huihui-ai/QwQ-32B-abliterated
EXL2 quant with a fixed chat template so that the first tag is generated correctly when serving with tabbyAPI and other servers.
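Once the EXL2 weights are loaded in tabbyAPI (or any other OpenAI-compatible server), the model can be queried through the standard chat-completions endpoint. Below is a minimal sketch, assuming a local server on port 5000 (tabbyAPI's default) and a served model name of `QwQ-32B-abliterated-exl2`; both are placeholders to adjust for your setup.

```python
# Minimal sketch: query a tabbyAPI (OpenAI-compatible) server hosting this quant.
# Assumptions: the server listens on localhost:5000 and the model name below
# matches what your server reports; replace the API key with the one tabbyAPI
# prints at startup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # OpenAI-compatible endpoint (assumed port)
    api_key="YOUR_TABBYAPI_KEY",          # placeholder key
)

response = client.chat.completions.create(
    model="QwQ-32B-abliterated-exl2",     # hypothetical served model name
    messages=[{"role": "user", "content": "Explain abliteration in one paragraph."}],
    temperature=0.6,
)
print(response.choices[0].message.content)
```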
This is an uncensored version of Qwen/QwQ-32B created with abliteration (see remove-refusals-with-transformers to learn more about it).
That project is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
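For intuition, abliteration estimates a "refusal direction" from the difference in mean hidden activations between prompts the model refuses and prompts it answers, then projects that direction out of the model's weights. The toy sketch below only illustrates the linear-algebra step on random tensors; it is not the code from remove-refusals-with-transformers, and the shapes and names are purely illustrative.

```python
# Illustrative-only sketch of the projection step used in abliteration.
# Real implementations collect activations from refused vs. benign prompts
# across model layers; random tensors stand in for those activations here.
import torch

hidden_size = 64
refused_acts = torch.randn(100, hidden_size)   # activations on prompts that trigger refusals
benign_acts = torch.randn(100, hidden_size)    # activations on benign prompts

# Refusal direction: normalized difference of the mean activations.
refusal_dir = refused_acts.mean(dim=0) - benign_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

# Ablate the direction from a weight matrix that writes into the residual
# stream: W <- (I - r r^T) W, so the layer's output has no component along r.
W = torch.randn(hidden_size, hidden_size)      # stand-in for e.g. an output projection
W_ablated = W - torch.outer(refusal_dir, refusal_dir @ W)

# The ablated weight can no longer write along the refusal direction.
print((refusal_dir @ W_ablated).norm())        # ~0 up to floating-point error
```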
Use with ollama
You can use huihui_ai/qwq-abliterated directly
ollama run huihui_ai/qwq-abliterated
All quantizations from Q2_K to fp16 are available through ollama.
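Beyond the interactive `ollama run` command, the model can also be called programmatically through ollama's local HTTP API. A minimal sketch, assuming ollama is serving on its default port 11434 and the model has already been pulled; the quant tag mentioned in the comment is illustrative.

```python
# Minimal sketch: call the locally running ollama server for this model.
# Assumptions: ollama listens on the default port 11434 and
# "huihui_ai/qwq-abliterated" has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/qwq-abliterated",  # append a tag such as ":q4_K_M" for a specific quant
        "messages": [{"role": "user", "content": "Summarize what abliteration changes in a model."}],
        "stream": False,                       # return one JSON object instead of a token stream
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```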
Donation
If you like it, please click 'like' and follow us for more updates.
You can follow x.com/support_huihui to get the latest model information from huihui.ai.
Your donation helps us continue development and improvement; even the price of a cup of coffee makes a difference.
- bitcoin:
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge