
D_AU - Thinking / Reasoning Models - Reg and MOEs.
QwQ, DeepSeek, EXAONE, DeepHermes, and other "thinking/reasoning" AIs / LLMs in regular, MOE (Mixture of Experts), and hybrid model formats.
Updated • 4.81k • 59
DavidAU/Reka-Flash-3-21B-Reasoning-Uncensored-MAX-NEO-Imatrix-GGUF
Text Generation • Updated • 59.7k • 47
DavidAU/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-gguf
Text Generation • Updated • 870 • 20
DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B-GGUF
Text Generation • Updated • 577 • 11
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-GGUF
Text Generation • Updated • 1.33k • 10
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose.
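The MOE GGUFs in this collection generally let you choose how many experts are active at load time. A minimal sketch with llama-cpp-python, assuming the GGUF exposes the standard llama.cpp MoE metadata key "llama.expert_used_count" (check each model card; the quant filename and expert count below are placeholders, not values taken from this collection):

```python
# Sketch: load a MOE GGUF and override the number of active experts.
# Assumes llama-cpp-python and the standard "llama.expert_used_count" metadata
# key; verify against the model card before relying on this.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-MOE-4X8B-Deep-Thinker-Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=8192,
    n_gpu_layers=-1,                              # offload all layers if VRAM allows
    kv_overrides={"llama.expert_used_count": 3},  # activate 3 of the 4 experts (assumption)
)

out = llm.create_completion(
    "Think step by step: why does the sky appear blue?",
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```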
DavidAU/DeepSeek-V2-Grand-Horror-SMB-R1-Distill-Llama-3.1-Uncensored-16.5B-GGUF
Text Generation • Updated • 682 • 10
DavidAU/DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-GGUF
Text Generation • Updated • 476 • 7
DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • Updated • 2.32k • 8
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf
Text Generation • Updated • 877 • 8
DavidAU/DeepSeek-BlackRoot-R1-Distill-Llama-3.1-8B-GGUF
Text Generation • Updated • 449 • 6
DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-gguf
Text Generation • Updated • 280 • 6
Note: MOE - Mixture of Experts version. This model has 8 times the power of a standard 3B model. It will have deeper thinking/reasoning and more complex prose.
DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-gguf
Updated • 965 • 6
DavidAU/Llama-3.1-DeepSeek-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • Updated • 770 • 5
DavidAU/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-8.71B-gguf
Text Generation • Updated • 251 • 5
Note: MOE - Mixture of Experts version. This model has 6 times the power of a standard 1.5B model. It will have deeper thinking/reasoning and more complex prose.
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B-GGUF
Text Generation • Updated • 297 • 3
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose.
DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf
Text Generation • Updated • 1.28k • 4
Note: MOE - Mixture of Experts version. This model has 2 times the power of a standard 1.5B model. It will have deeper thinking/reasoning and more complex prose.
DavidAU/DeepHermes-3-Llama-3-8B-Preview-16.5B-Brainstorm-gguf
Text Generation • Updated • 347 • 3
DavidAU/DeepSeek-R1-Distill-Qwen-25.5B-Brainstorm-gguf
Text Generation • Updated • 549 • 2
DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-gguf
Text Generation • Updated • 587 • 4
Note: MOE - Mixture of Experts version. This model has 2 times the power of a standard 7B model. It will have deeper thinking/reasoning and more complex prose.
DavidAU/Deep-Reasoning-Llama-3.2-10pack-f16-gguf
Text Generation • Updated • 2.26k • 1
Note: Links to all 10 models in GGUF (regular and Imatrix) format also on this page.
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-gguf
Text Generation • Updated • 154 • 1
DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B
Text Generation • Updated • 12 • 1
DavidAU/Deep-Reasoning-Llama-3.2-JametMini-3B-MK.III
Text Generation • Updated • 8 • 1
DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B
Text Generation • Updated • 11 • 1
DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B
Text Generation • Updated • 14 • 1
DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • Updated • 1.34k • 3
DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B
Text Generation • Updated • 8 • 1
DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B
Text Generation • Updated
DavidAU/Deep-Reasoning-Llama-3.2-COT-3B
Text Generation • Updated • 1
DavidAU/Deep-Reasoning-Llama-3.2-Dolphin3.0-3B
Text Generation • Updated • 2
DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B
Text Generation • Updated • 2
DavidAU/Deep-Reasoning-Llama-3.2-ShiningValiant2-3B
Text Generation • Updated • 3
DavidAU/Deep-Reasoning-Llama-3.2-BlackSheep-3B
Text Generation • Updated
DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-HORROR-Imatrix-GGUF
Text Generation • Updated • 1.74k
DavidAU/EXAONE-Deep-2.4B-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • Updated • 395 • 3
DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • Updated • 980 • 1
DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-Horror-Imatrix-MAX-8B-GGUF
Text Generation • Updated • 5.99k • 3
DavidAU/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • Updated • 3.65k
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-Hermes-R1-Uncensored-36B
Text Generation • Updated • 133
Note: MOE - Mixture of Experts version. This model has 6 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose. Links to GGUF / Imatrix GGUFs also on this page.
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-GGUF
Text Generation • Updated • 2.12k
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-i1-GGUF
Updated • 17.1k • 1
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose. Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-GGUF
Text Generation • Updated • 4.62k
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-i1-GGUF
Updated • 9.8k
Note: MOE - Mixture of Experts version. This model has 4 times the power of a standard 8B model. It will have deeper thinking/reasoning and more complex prose. Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
Updated • 16.5k • 2
Note: Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
Updated • 651 • 1
Note: Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-Dark-Reasoning-Halu-Blackroot-Hermes-R1-Uncensored-8B
Text Generation • Updated • 23 • 1
DavidAU/L3.1-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-8B
Text Generation • Updated • 9 • 1
Note: Links to GGUF / Imatrix GGUFs also on this page.
DavidAU/L3.1-Dark-Reasoning-Jamet-8B-MK.I-Hermes-R1-Uncensored-8B
Text Generation • Updated • 29 • 1
Note: Links to GGUF / Imatrix GGUFs also on this page.
DavidAU/L3.1-Dark-Reasoning-Anjir-Hermes-R1-Uncensored-8B
Text Generation • Updated • 22 • 1
Note: Links to GGUF / Imatrix GGUFs also on this page.
DavidAU/L3.1-Dark-Reasoning-Celeste-V1.2-Hermes-R1-Uncensored-8B
Text Generation • Updated • 28 • 1
Note: Links to GGUF / Imatrix GGUFs also on this page.
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
Updated • 110
Note: Document detailing all parameters, settings, samplers and advanced samplers needed to get the maximum potential not only from my models, but from all models (and quants) online, regardless of the repo. Includes a quick start, detailed notes, AI / LLM apps, and other critical information and references. A must read if you are using any AI/LLM right now.
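As a quick illustration of the knobs that document covers, here is a minimal llama-cpp-python call wiring up the common samplers. The model filename and the specific values are placeholders, not recommendations taken from the document:

```python
# Minimal sketch: common sampler/parameter settings via llama-cpp-python.
# All values are illustrative placeholders; see the document above for
# per-model / per-quant recommendations.
from llama_cpp import Llama

llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=4096)  # hypothetical quant file

result = llm.create_completion(
    prompt="Write a short scene set in an abandoned lighthouse.",
    max_tokens=400,
    temperature=0.9,      # higher = more varied / creative output
    top_k=40,             # sample only from the 40 most likely tokens
    top_p=0.95,           # nucleus sampling cutoff
    min_p=0.05,           # drop tokens below 5% of the top token's probability
    repeat_penalty=1.1,   # discourage verbatim repetition
)
print(result["choices"][0]["text"])
```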
DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE
Text Generation • Updated • 45
Note: SOFTWARE patch (by me) for SillyTavern (a front end that connects to multiple AI apps / APIs such as KoboldCpp, LM Studio, Text Generation Web UI and others) to control and improve output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/sampler adjustments.
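For context, a front end like SillyTavern drives back ends such as KoboldCpp over a plain HTTP API. A rough sketch of that request path, assuming KoboldCpp's usual /api/v1/generate endpoint on its default port 5001 (your install may differ; the sampler values are placeholders):

```python
# Sketch: send a generation request to a locally running KoboldCpp instance,
# the same kind of call a front end such as SillyTavern makes on your behalf.
# Endpoint, port and field names follow common KoboldCpp defaults (assumption).
import requests

payload = {
    "prompt": "You are a careful reasoning assistant.\nUser: Explain recursion briefly.\nAssistant:",
    "max_length": 300,    # tokens to generate
    "temperature": 0.7,
    "top_p": 0.9,
    "rep_pen": 1.1,       # Kobold-style name for repetition penalty
}

resp = requests.post("http://127.0.0.1:5001/api/v1/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```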
DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them
Text Generation • Updated • 5
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-HORROR-R1-Uncensored-36B-GGUF
Text Generation • Updated • 5.05k • 1
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B-GGUF
Text Generation • Updated • 773 • 4
DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-GGUF
Text Generation • Updated • 329 • 1
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-COGITO-Deep-Reasoning-32B-GGUF
Text Generation • Updated • 1.11k • 2
DavidAU/Qwen3-0.6B-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 294
DavidAU/Qwen3-0.6B-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 251
DavidAU/Qwen3-1.7B-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 477 • 1
DavidAU/Qwen3-1.7B-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 438 • 1
DavidAU/Qwen3-4B-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 623
DavidAU/Qwen3-4B-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 418 • 1
DavidAU/Qwen3-8B-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 721
DavidAU/Qwen3-8B-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 262
DavidAU/Qwen3-4B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • Updated • 111
DavidAU/Qwen3-14B-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 685 • 1
DavidAU/Qwen3-14B-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 396
DavidAU/Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • Updated • 91
DavidAU/Qwen3-4B-Mishima-Imatrix-GGUF
Text Generation • Updated
DavidAU/Qwen3-32B-128k-HORROR-Imatrix-Max-GGUF
Text Generation • Updated • 1
DavidAU/Qwen3-32B-128k-NEO-Imatrix-Max-GGUF
Text Generation • Updated • 1
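Most of the thinking/reasoning models listed above emit their chain of thought inside <think>...</think> tags before the final answer (check each model card for the exact convention). A small sketch for separating the reasoning block from the answer, assuming that tag format:

```python
# Sketch: split a reasoning model's output into its thought block and final answer.
# Assumes the model wraps its reasoning in <think>...</think>; some models use
# different markers, so verify against the model card.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()          # no thought block found
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing tag
    return reasoning, answer

sample = "<think>The user wants a one-line summary...</think>\nRecursion is a function calling itself."
thoughts, answer = split_reasoning(sample)
print("REASONING:", thoughts)
print("ANSWER:", answer)
```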