Gemma-3-27B-it-tr-reasoning · Q4_K_M (GGUF)
A GGUF Q4_K_M quantization of emre/gemma-3-27b-it-tr-reasoning40k-4bit for fast, low-memory local inference with llama.cpp and compatible back-ends.
Only quantization was applied; the model's alignment, vocabulary, and tokenizer are unchanged.
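A minimal sketch of running this quantization locally with llama.cpp. The GGUF filename and the sampling parameters below are assumptions, not taken from this card — check the repository's file listing for the exact filename.

```shell
# Download the quantized weights from the Hub (requires the huggingface_hub CLI).
huggingface-cli download Tiotanio/gemma-3-27b-it-tr-reasoning_Q4_K_M \
  --include "*.gguf" --local-dir ./models

# Run an interactive session with llama.cpp.
# NOTE: the .gguf filename is an assumed example; -ngl 99 offloads all layers
# to the GPU when one is available, and -c sets the context window.
llama-cli -m ./models/gemma-3-27b-it-tr-reasoning_Q4_K_M.gguf \
  -ngl 99 -c 8192 --temp 0.7 \
  -p "Merhaba, kendini tanıtır mısın?"
```

At Q4_K_M, a 27B model needs roughly 16-17 GB for the weights alone, so adjust `-ngl` downward if the model does not fit entirely in VRAM.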
Developed by: emre
Finetuned from model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for Tiotanio/gemma-3-27b-it-tr-reasoning_Q4_K_M
- Base model: google/gemma-3-27b-pt
- Finetuned: google/gemma-3-27b-it
- Quantized: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
- Finetuned: emre/gemma-3-27b-it-tr-reasoning40k-4bit