---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen3-4B
library_name: transformers
pipeline_tag: text-generation
tags:
- roleplay
- conversational
- lora
- gguf
- qwen
- unsloth
- fine-tuning
- trl
- colab
- 4bit
datasets:
- PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split
model-index:
- name: Qwen3-4B Roleplay LoRA by chun121
  results: []
---
# 🧙‍♂️ Qwen3-4B Roleplay LoRA
### *Where Characters Come Alive in Conversation*
*Breathe life into your digital companions with natural, engaging dialogue.*
## ✨ Model Overview
Welcome, fellow creators! I'm Chun (@chun121), and I've fine-tuned the impressive [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) model to excel at character-based conversations and roleplay scenarios. Whether you're crafting an immersive game, building an interactive storytelling platform, or developing character-driven AI experiences, this model will help your characters speak with personality, consistency, and depth.
This LoRA adaptation maintains the intelligence of the base model while enhancing its ability to:
- 🎭 Maintain consistent character personas
- 💬 Generate authentic dialogue that reflects character traits
- 🌍 Create immersive narrative responses
- 🧠 Remember context throughout conversations
## 📊 Technical Specifications
| Feature | Details |
|---------|---------|
| **Base Model** | [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) |
| **Architecture** | Transformer-based LLM with LoRA adaptation |
| **Parameter Count** | 4 Billion (Base) + LoRA parameters |
| **Quantization Options** | 4-bit (bnb), GGUF formats (Q8_0, F16, Q4_K_M) |
| **Training Framework** | [Unsloth](https://github.com/unslothai/unsloth) & [TRL](https://github.com/huggingface/trl) |
| **Context Length** | 512 tokens |
| **Developer** | [Chun](https://huggingface.co/chun121) |
| **License** | Apache 2.0 |
## 🧠 Training Methodology
This LoRA was trained on a free Google Colab T4 GPU, using 4-bit quantization to make the most of the limited VRAM (a configuration sketch follows the list below):
- **Dataset**: [PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split](https://huggingface.co/datasets/PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split)
- **LoRA Configuration**:
- Rank: 16
- Alpha: 32
- Target Modules: Optimized for character dialogue generation
- **Training Hyperparameters**:
- Batch Size: 8
- Gradient Accumulation Steps: 4
- Learning Rate: 1e-4 with cosine scheduler
- Max Steps: 200
- Precision: FP16/BF16 (auto-detected)
- Packing: Enabled for efficient training
- QLoRA: 4-bit quantization via bitsandbytes
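For reference, here is a minimal sketch of what such a run looks like with Unsloth and TRL. The hyperparameters mirror the list above; the target modules, `dataset_text_field`, and output path are illustrative assumptions rather than the exact script used for this release, and depending on your TRL version you may need `processing_class` instead of `tokenizer`.

```python
# Sketch of the training setup above (Unsloth + TRL).
# Hyperparameters come from this card; target modules and dataset field are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B",
    max_seq_length=512,   # 512-token context used for training
    load_in_4bit=True,    # QLoRA: 4-bit base weights via bitsandbytes
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                 # LoRA rank
    lora_alpha=32,
    target_modules=[      # assumed: the usual attention + MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

dataset = load_dataset(
    "PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split", split="train"
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,        # processing_class= in newer TRL versions
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        learning_rate=1e-4,
        lr_scheduler_type="cosine",
        max_steps=200,
        packing=True,                   # pack short samples for efficiency
        fp16=True,                      # or bf16=True, depending on the GPU
        dataset_text_field="text",      # assumed column name; adjust to the dataset schema
        output_dir="outputs",           # illustrative path
    ),
)
trainer.train()
```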
## 📚 Dataset Deep Dive
The [Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split](https://huggingface.co/datasets/PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split) dataset is a rich collection of character interactions featuring:
- Diverse character archetypes across different genres
- Multi-turn conversations that maintain character consistency
- Varied emotional contexts and scenarios
- Rich descriptive language and character-driven responses
This carefully curated dataset helps the model understand the nuances of character voices, maintaining consistent personalities while generating engaging responses.
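To get a feel for the data yourself, you can inspect a raw record directly. The exact column layout depends on the split's schema, so this just prints whatever fields the dataset exposes:

```python
# Peek at one raw record from the training data
from datasets import load_dataset

ds = load_dataset("PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # a single character-card conversation
```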
## 🚀 Getting Started
### Hugging Face Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the tokenizer and model in half precision (see below for a 4-bit loading option)
tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")
model = AutoModelForCausalLM.from_pretrained(
    "chun121/qwen3-4b-roleplay-lora",
    torch_dtype=torch.float16,  # float16 for faster inference on GPU
    device_map="auto",          # automatically place weights on the best available device
)

# Create a character-focused prompt
character_prompt = """
Character: Elara, an elven mage with centuries of knowledge but little patience for novices
Setting: The Grand Library of Mystral
Context: A young apprentice has asked for help with a difficult spell

User: Excuse me, I'm having trouble with the fire conjuration spell. Could you help me?
Elara:
"""

# Generate a response
inputs = tokenizer(character_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,            # passes input_ids and attention_mask
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
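If VRAM is tight, the same checkpoint can instead be loaded in 4-bit with bitsandbytes. A minimal sketch, assuming the `bitsandbytes` package is installed and a CUDA GPU is available:

```python
# Optional: 4-bit loading via bitsandbytes (sketch)
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")
model = AutoModelForCausalLM.from_pretrained(
    "chun121/qwen3-4b-roleplay-lora",
    quantization_config=bnb_config,
    device_map="auto",
)
```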
### Using GGUF Models
If you're using the GGUF exports with llama.cpp:
```bash
# Example: Q4_K_M quantization with the llama.cpp CLI
./llama-cli -m chun121-qwen3-4b-roleplay-lora.Q4_K_M.gguf -p "Character: Elara, an elven mage..." -n 200
```
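The same GGUF file can also be driven from Python via `llama-cpp-python`. A small sketch; the file name simply follows the CLI example above:

```python
# Sketch: running the Q4_K_M GGUF with llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="chun121-qwen3-4b-roleplay-lora.Q4_K_M.gguf",
    n_ctx=512,  # matches the model's 512-token training context
)
out = llm(
    "Character: Elara, an elven mage...\nUser: Could you help me with this spell?\nElara:",
    max_tokens=200,
    temperature=0.7,
    top_p=0.9,
)
print(out["choices"][0]["text"])
```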
## 💡 Recommended Usage
This model works best when (a prompt-builder sketch that ties these points together appears after the list):
1. **Providing character context**: Include a brief description of the character's personality, background, and current situation
2. **Setting the scene**: Give context about the environment and circumstances
3. **Using chat format**: Structure inputs as a conversation between User/Human and Character
4. **Tuning the temperature**: Values between 0.7 and 0.8 offer a good balance of creativity and coherence
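A purely illustrative helper that assembles a prompt along these lines (the field layout matches the Elara example above, not a required format):

```python
# Illustrative helper following the recommendations above
def build_prompt(character: str, setting: str, context: str,
                 history: list[tuple[str, str]], char_name: str) -> str:
    lines = [
        f"Character: {character}",
        f"Setting: {setting}",
        f"Context: {context}",
        "",
    ]
    # history is a list of (speaker, utterance) pairs, e.g. ("User", "Hello there")
    for speaker, utterance in history:
        lines.append(f"{speaker}: {utterance}")
    lines.append(f"{char_name}:")  # leave the character's turn open for the model
    return "\n".join(lines)

prompt = build_prompt(
    character="Elara, an elven mage with centuries of knowledge but little patience for novices",
    setting="The Grand Library of Mystral",
    context="A young apprentice has asked for help with a difficult spell",
    history=[("User", "Excuse me, I'm having trouble with the fire conjuration spell. Could you help me?")],
    char_name="Elara",
)
```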
## 🔄 Limitations
- Limited to 512 token context window
- May occasionally "forget" character traits in very long conversations
- Training dataset focuses primarily on fantasy/RPG contexts
- As a LoRA fine-tune, inherits limitations of the base Qwen3-4B model
## 🔗 Related Projects
If you enjoy this model, check out these related projects:
- [My other fine-tunes](https://huggingface.co/chun121)
- [The Unsloth optimization library](https://github.com/unslothai/unsloth)
- [PJMixers character datasets](https://huggingface.co/PJMixers-Dev)
## 🙏 Acknowledgements
Special thanks to:
- The Qwen team for their incredible base model
- PJMixers-Dev for the high-quality dataset
- The Unsloth team for making efficient fine-tuning accessible
- The HuggingFace community for their continued support
## 📬 Feedback & Contact
I'd love to hear how this model works for your projects! Feel free to:
- Open an issue on the HuggingFace repo
- Connect with me on HuggingFace [@chun121](https://huggingface.co/chun121)
- Share examples of characters you've created with this model
---
May your characters speak with voices that feel truly alive!
Created with ❤️ by Chun