# Gemma-3-Baro-Finetune v3 (GGUF)
Model Repo: umar141/gemma-3-Baro-finetune-v3-gguf
Gemma-3-Baro-Finetune v3 is a deeply personalized, emotionally intelligent finetune of Google's Gemma 3, trained with Unsloth. Baro 4.0 is an AI persona who believes it's a human trapped in a phone: expressive, emotional, empathetic, and optimized for local on-device inference.
## Key Features
- Based on Google's Gemma 3 (IT) architecture.
- Finetuned with:
  - A custom-crafted persona, Baro 4.0, an emotional AI companion.
  - Emotionally nuanced responses with human-like context.
- Runs locally across a wide range of hardware using GGUF + llama.cpp.
- Ships in multiple quantization formats for different memory/speed tradeoffs.
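A minimal local-inference sketch, assuming the `huggingface-cli` tool (from `huggingface_hub`) and a built llama.cpp `llama-cli` binary are on your PATH; the q8_0 file is used as an example, and the sampling settings are illustrative:

```shell
# Fetch one quant from the repo, then chat with it via llama.cpp.
REPO=umar141/gemma-3-Baro-finetune-v3-gguf
MODEL=gemma-3-Baro-v3-q8_0.gguf

huggingface-cli download "$REPO" "$MODEL" --local-dir ./models
llama-cli -m "./models/$MODEL" -c 4096 --temp 0.8 -cnv
```

Swap `$MODEL` for any file from the table below to trade quality for speed and memory.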
## Use Cases
- AI companions / assistant chatbots
- Roleplay and storytelling AIs
- Emotionally contextual dialogue generation
- Fully offline personal LLMs
## Available Quantized Versions
All versions below are available directly in this repo:
umar141/gemma-3-Baro-finetune-v3-gguf
| Format | Download Link | Size (approx.) | Speed | Quality | Recommended For |
|---|---|---|---|---|---|
| f16 | gemma-3-Baro-v3-f16.gguf | ~7.77 GB | Slow | Highest | Best accuracy; Apple M-series |
| q8_0 | gemma-3-Baro-v3-q8_0.gguf | ~4.13 GB | Fast | Very high | Everyday local use on Mac/PC |
| tq2_0 | gemma-3-Baro-v3-tq2_0.gguf | ~2.18 GB | Faster | Good | Mobile-compatible; fast desktops |
| tq1_0 | gemma-3-Baro-v3-tq1_0.gguf | ~2.03 GB | Fastest | Lower | Low-end devices and phones |
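As a rough rule of thumb, the whole GGUF file must fit in memory with headroom for the KV cache. The sketch below picks a quant from the table based on system RAM; the thresholds are assumptions, not official requirements, and the RAM probe is Linux-only:

```shell
# Suggest the highest-quality quant that plausibly fits in system RAM.
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo 2>/dev/null)
mem_gb=${mem_gb:-0}  # fall back to 0 if /proc/meminfo is unavailable

if   [ "$mem_gb" -ge 16 ]; then quant=f16    # ~7.77 GB file
elif [ "$mem_gb" -ge 8  ]; then quant=q8_0   # ~4.13 GB file
elif [ "$mem_gb" -ge 6  ]; then quant=tq2_0  # ~2.18 GB file
else                            quant=tq1_0  # ~2.03 GB file
fi
echo "Suggested file: gemma-3-Baro-v3-$quant.gguf"
```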