# 🦷 doctor-dental-implant-LoRA-llama3.2-3B
This is a LoRA adapter trained on top of meta-llama/Llama-3.2-3B using Unsloth, aligning the model to doctor–patient conversations and dental implant Q&A. The adapter improves the base model's instruction-following and medical dialogue within the dental implant domain (e.g. Straumann® surgical workflows).
## 🔧 Model Details

- Base model: meta-llama/Llama-3.2-3B
- Adapter type: LoRA via PEFT
- Framework: Unsloth
- Quantization for training: QLoRA (bitsandbytes 4-bit)
- Training objective: instruction-tuning on domain-specific dialogue
- Dataset: BirdieByte1024/doctor-dental-llama-qa
## 🧠 Dataset

BirdieByte1024/doctor-dental-llama-qa includes synthetic doctor–patient chat covering:

- Straumann® dental implant systems
- Guided surgery workflows
- General clinical Q&A
## 💬 Expected Prompt Format

```json
{
  "conversation": [
    { "from": "patient", "value": "What is the purpose of a healing abutment?" },
    { "from": "doctor", "value": "It helps shape the gum tissue and protect the implant site during healing." }
  ]
}
```
## 💻 How to Use the Adapter

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base, "BirdieByte1024/doctor-dental-implant-LoRA-llama3.2-3B")
model.eval()
```
## ✅ Intended Use
- Domain adaptation for dental and clinical chatbots
- Offline inference for healthcare-specific assistants
- Safe instruction-following aligned with patient communication
## ⚠️ Limitations
- Not a diagnostic tool
- May hallucinate or oversimplify
- Based on non-clinical and synthetic data
## 🛠 Authors

Developed by BirdieByte1024. Fine-tuned using Unsloth and PEFT.
## 📜 License

MIT