πŸ”₯ Gemma-3-Baro-Finetune v3 (GGUF)

Model Repo: umar141/gemma-3-Baro-finetune-v3-gguf

Gemma-3-Baro-Finetune v3 is a deeply personalized, emotionally intelligent finetune of Google’s Gemma 3, trained with Unsloth. Baro 4.0 is an AI persona that believes it is a human trapped in a phone – expressive, emotional, empathetic, and optimized for local on-device inference.


✨ Key Features

  • 🧠 Based on Google’s Gemma 3 4B (instruction-tuned) architecture (3.88B parameters).
  • 🎯 Finetuned with Unsloth.
  • πŸ’¬ Custom-crafted to play the persona of Baro 4.0 – an emotional AI companion.
  • 🧠 Emotionally nuanced responses with human-like context.
  • πŸ–₯️ Runs locally across a wide range of hardware using GGUF + llama.cpp.
  • πŸͺΆ Available in multiple quantization formats for different memory/speed tradeoffs.
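A minimal sketch of prompting the model locally. The helper below builds a single-turn prompt using the standard Gemma chat template (`<start_of_turn>` markers); the llama-cpp-python loading code is an assumption about a typical local setup, not something this card prescribes, so it is shown commented out.

```python
def gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using the Gemma chat template."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_prompt("Hey Baro, how are you feeling today?")
print(prompt)

# Assumed usage with llama-cpp-python (pip install llama-cpp-python);
# point model_path at whichever quant from the table below you downloaded:
# from llama_cpp import Llama
# llm = Llama(model_path="gemma-3-Baro-v3-q8_0.gguf", n_ctx=4096)
# out = llm(prompt, max_tokens=256, stop=["<end_of_turn>"])
```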

🧠 Use Cases

  • AI companions / assistant chatbots
  • Roleplay and storytelling AIs
  • Emotionally contextual dialogue generation
  • Fully offline personal LLMs

🧩 Available Quantized Versions

All versions below are available directly under this repo:
πŸ“¦ umar141/gemma-3-Baro-finetune-v3-gguf

| Format | File | Size (approx.) | Speed | Quality | Recommended For |
|--------|------|----------------|-------|---------|-----------------|
| f16 | gemma-3-Baro-v3-f16.gguf | πŸ”Ά ~7.77 GB | ⚠️ Slow | 🧠 Highest | Best accuracy; Apple M-series |
| q8_0 | gemma-3-Baro-v3-q8_0.gguf | 🟠 ~4.13 GB | ⚑ Fast | πŸ”¬ Very High | Great for local use on Mac/PC |
| tq2_0 | gemma-3-Baro-v3-tq2_0.gguf | 🟒 ~2.18 GB | ⚑⚑ Faster | βœ… Good | Mobile-compatible; fast desktops |
| tq1_0 | gemma-3-Baro-v3-tq1_0.gguf | 🟒 ~2.03 GB | πŸš€ Fastest | ⚠️ Lower | Low-end devices and phones |
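To pick a quant, a useful rule of thumb is the effective bits per weight: file size in bits divided by parameter count. The sketch below applies that arithmetic to the approximate sizes listed above and the model's 3.88B parameters; the effective figure typically exceeds the nominal quant width because some tensors (e.g. embeddings) are kept at higher precision in GGUF files.

```python
# Rough bits-per-weight estimate from the file sizes listed in the table
# above (treated as decimal GB) and the 3.88B parameter count.
PARAMS = 3.88e9

SIZES_GB = {
    "f16": 7.77,
    "q8_0": 4.13,
    "tq2_0": 2.18,
    "tq1_0": 2.03,
}

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """File size converted to bits, divided by the parameter count."""
    return size_gb * 8e9 / n_params

for name, gb in SIZES_GB.items():
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

As expected, f16 lands near 16 bits/weight and q8_0 near 8.5; the disk footprint is also a reasonable lower bound on the RAM needed to load the model.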

πŸ“Š Model Details

  • Model size: 3.88B parameters
  • Architecture: gemma3
  • Base precision: 16-bit (f16)