Model Card for ZainYasir/Puck-Perosnalized-bot
🧠 Puck Personalized Bot
Model Details
A TinyLlama-based personalized conversational model trained on 5,000+ samples of English and Roman Urdu messages by Zain Yasir, reflecting his unique tone, knowledge, beliefs, and friend circle. Designed to power a private AI assistant named Puck.
Model Description
This is a 1.1B-parameter TinyLlama model fine-tuned with LoRA (4-bit) on personal, technical, religious, and conversational data. It understands English and Roman Urdu text and is tailored to mimic the natural, reflective, and casual style of the user's own messaging history.
- License: MIT
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Training data: 5,000+ messages in a custom instruction-response format
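A minimal loading sketch, assuming the repository publishes a LoRA adapter on top of the TinyLlama base model and that 4-bit loading via bitsandbytes is desired at inference time; if the weights are instead merged into a standalone checkpoint, load them directly with `AutoModelForCausalLM`.

```python
# Minimal loading sketch (assumes a LoRA adapter published on the Hub;
# adjust repo IDs and quantization settings to your actual setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
ADAPTER = "ZainYasir/Puck-Perosnalized-bot"  # repo ID as listed on the Hub

# 4-bit loading mirrors the training setup; requires bitsandbytes and a GPU.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights
model.eval()
```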
Model Sources
- Repository: [More Information Needed]
- Paper: [More Information Needed]
- Demo: [More Information Needed]
Uses
Direct Use
- Chatbot for personal productivity, task planning, and faith-aligned reminders.
- Assisting in small talk, Q&A, and self-reflective prompts.
- Custom assistants (e.g., Puck on local apps or APIs); a single-turn chat sketch follows this list.
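The snippet below sketches a single-turn chat for this kind of direct use. It reuses `model` and `tokenizer` from the loading example above; the system prompt and user message are illustrative only, not the prompts used in training.

```python
# Single-turn chat sketch; the prompts here are illustrative placeholders.
messages = [
    {"role": "system", "content": "You are Puck, Zain's personal assistant."},
    {"role": "user", "content": "Plan my tasks for tomorrow morning."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```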
Downstream Use
- Can be extended with RAG for dynamic factual recall; a rough retrieval sketch follows this list.
- Useful as a base for personalized LLM agents or lightweight voice assistants.
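A rough sketch of the RAG idea above: retrieve a few relevant notes and prepend them to the user message before generation. The keyword-match retriever and the note contents are placeholders for illustration; a real setup would use an embedding index or vector store.

```python
# Naive RAG sketch: keyword-match retrieval over a small in-memory note store,
# then prepend the hits to the user message. Notes below are placeholders.
NOTES = [
    "Puck should give faith-aligned reminders such as prayer times.",
    "Current project: fine-tuning TinyLlama with LoRA on Kaggle T4s.",
    "Zain prefers short, casual replies mixing English and Roman Urdu.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score notes by how many query words they contain (illustrative only).
    scored = sorted(NOTES, key=lambda n: -sum(w in n.lower() for w in query.lower().split()))
    return scored[:k]

def rag_messages(user_msg: str) -> list[dict]:
    context = "\n".join(retrieve(user_msg))
    return [
        {"role": "system", "content": "You are Puck. Use the context if relevant."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_msg}"},
    ]

# messages = rag_messages("What project am I working on?")  # then generate as above
```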
Out-of-Scope Use
- Not for production-scale systems (use larger models instead).
- Not suitable for sensitive decision-making or medical/legal advice.
Training Details
📚 Training Data
The model was trained on a curated dataset including the following (an example record is sketched after the list):
- 600+ facts about Zain and friends (Q&A format × paraphrased)
- 500+ general conversations (e.g., daily routine, habits)
- 200+ tech/personal Q&A (projects, skills, tools)
- 3,700+ random Roman Urdu + English chats (faith, Pakistan, jokes, thoughts)
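The exact schema of the custom instruction-response format is not shown on this card; the record below is one plausible layout, with field names and sample text that are assumptions.

```python
# One plausible record in the custom instruction-response format.
# Field names and the sample text are assumptions; the real schema may differ.
example_record = {
    "instruction": "Puck, what should I focus on this weekend?",
    "response": "Finish the fine-tuning notebook, then rest and review the week.",
}
```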
⚙️ Hyperparameters
- Epochs: 3
- Batch size: 4 per device × 4 gradient accumulation steps
- LR: 2e-4
- Precision: FP16
- LoRA config: r=8, alpha=16, target modules: q_proj, v_proj (see the config sketch below)
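A hedged reconstruction of this configuration using peft and transformers; values not listed above (dropout, output directory, logging cadence) are assumptions, not the exact settings used.

```python
# Sketch of the LoRA fine-tuning config implied by the hyperparameters above.
# Values not listed on the card are assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,        # assumption: not stated on the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="puck-lora",             # assumption
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=20,                   # assumption
)
```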
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019); a back-of-the-envelope version of the calculation follows the list below.
- Hardware Type: 2× NVIDIA T4
- Hours used: ~2.5
- Cloud Provider: Kaggle (Google Cloud)
- Compute Region: Pakistan
- Carbon Emitted: ~0.25 kg CO2e
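For transparency, here is a back-of-the-envelope version of that estimate. The GPU power draw (T4 TDP, roughly 70 W) and the grid carbon intensity are assumptions chosen for illustration; they are not the exact inputs used in the calculator.

```python
# Back-of-the-envelope carbon estimate. Power draw and carbon intensity are
# assumptions for illustration; the card's figure comes from the ML Impact
# calculator (Lacoste et al., 2019).
gpus = 2
gpu_power_kw = 0.070       # NVIDIA T4 TDP is about 70 W
hours = 2.5
carbon_intensity = 0.7     # kg CO2e per kWh, assumed grid mix

energy_kwh = gpus * gpu_power_kw * hours        # ≈ 0.35 kWh
emissions_kg = energy_kwh * carbon_intensity    # ≈ 0.25 kg CO2e
print(f"{energy_kwh:.2f} kWh, ~{emissions_kg:.2f} kg CO2e")
```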