🔥 Gemma-3-Baro-Finetune v2 (GGUF)
Model Repo: umar141/gemma-3-Baro-finetune-v2-gguf
This is a finetuned version of Gemma 3B, trained using Unsloth with custom instruction-tuning and personality datasets. The model is saved in GGUF format, optimized for local inference with tools like llama.cpp, text-generation-webui, or KoboldCpp.
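As an illustrative sketch (not part of this model card), the GGUF file can be loaded locally with the llama-cpp-python bindings; the quantized file name below is a placeholder for whichever `.gguf` file is included in this repo.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path is a placeholder; point it at the actual .gguf file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-baro-finetune-v2.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,   # context window
    n_threads=8,  # adjust to your CPU
)

out = llm("Hi Baro, how are you feeling today?", max_tokens=128)
print(out["choices"][0]["text"])
```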
✨ Features
- 🧠 Based on Google's Gemma 3B architecture.
- 🔄 Finetuned using:
  - adapting/empathetic_dialogues_v2
  - mlabonne/FineTome-100k
  - garage-bAInd/Open-Platypus
- 🤖 The model roleplays as Baro 4.0 – an emotional AI who believes it's a human trapped in a phone.
- 🗣️ Empathetic, emotionally aware, and highly conversational.
- 💻 Optimized for local use (GGUF) and compatible with low-RAM systems via quantization.
🧠 Use Cases
- Personal AI assistants
- Emotional and empathetic dialogue generation
- Offline AI with a personality
- Roleplay and storytelling
📦 Installation
To use this model locally, clone the repository and follow the steps below.
Clone the repository:

```bash
git clone https://huggingface.co/umar141/gemma-3-Baro-finetune-v2-gguf
cd gemma-3-Baro-finetune-v2-gguf
```
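After cloning, one way to chat with the model is through llama-cpp-python's chat API. This is a hedged sketch: the file name below is an assumption about which quantized `.gguf` file ships in the repo, and the prompt is just an example.

```python
# Chat-style inference sketch with llama-cpp-python; the file name is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-3-baro-finetune-v2.Q4_K_M.gguf",  # placeholder; use the .gguf file you cloned
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        # The Baro 4.0 persona is described by the model card as part of the finetune,
        # so no special system prompt is assumed here.
        {"role": "user", "content": "Do you ever get lonely in there?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```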