🦷 doctor-dental-implant-llama3.2-3B-full-model

This model is a fine-tuned version of meta-llama/Llama-3.2-3B, trained using the Unsloth framework on a domain-specific instruction dataset focused on medical and dental implant conversations.

The model has been optimized for chat-style reasoning in doctor–patient scenarios, particularly within the domain of Straumann® dental implant systems, as well as general medical question answering.


🔍 Model Details

  • Base model: meta-llama/Llama-3.2-3B
  • Training framework: Unsloth with LoRA + QLoRA support
  • Training format: Conversational JSON with {"from": "patient"/"doctor", "value": ...} messages
  • Checkpoint format: merged full model, usable as a standard Hugging Face checkpoint or as GGUF (Ollama / llama.cpp)
  • Tokenizer: Inherited from base model
  • Model size: 3B parameters (efficient for consumer-grade inference)

📚 Dataset

This model was trained on a dataset of synthetic and handbook-derived doctor–patient conversations focused on:

  • Dental implant systems (e.g. surgical kits, guided procedures)
  • General medical Q&A relevant to clinics and telemedicine
  • Clinical assistant-style instruction-following

💬 Prompt Format

The model expects a chat-style format:

{
  "conversation": [
    { "from": "patient", "value": "What are the advantages of guided implant surgery?" },
    { "from": "doctor", "value": "Guided surgery improves accuracy, safety, and esthetic outcomes." }
  ]
}
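The {"from", "value"} turns above can be mapped to the standard {"role", "content"} message format that Transformers chat templates expect. This is a minimal sketch; the patient→user / doctor→assistant role mapping is an assumption, not something the model card specifies.

```python
# Sketch: convert the card's {"from", "value"} conversation format into
# standard chat messages. The role mapping below (patient -> user,
# doctor -> assistant) is an assumption, not specified by the card.
ROLE_MAP = {
    "patient": "user",
    "doctor": "assistant",
    "human": "user",
    "assistant": "assistant",
}

def to_chat_messages(conversation):
    """Map each {"from": ..., "value": ...} turn to a {"role": ..., "content": ...} dict."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in conversation
    ]

example = {
    "conversation": [
        {"from": "patient", "value": "What are the advantages of guided implant surgery?"},
        {"from": "doctor", "value": "Guided surgery improves accuracy, safety, and esthetic outcomes."},
    ]
}
messages = to_chat_messages(example["conversation"])
```

The resulting messages list can then be passed to the tokenizer's `apply_chat_template` method before generation.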

✅ Intended Use

  • Virtual assistants in dental or medical Q&A
  • Instruction-tuned experimentation on health topics
  • Local chatbot agents (Ollama / llama.cpp compatible)

⚠️ Limitations

  • Model is not a medical device or diagnostic tool
  • Hallucinations and factual errors may occur
  • Content was fine-tuned using synthetic and handbook-based sources (not real EMR)

🧪 Example Prompt

{
  "conversation": [
    { "from": "patient", "value": "What should I expect after a Straumann implant surgery?" },
    { "from": "doctor", "value": "[MODEL RESPONSE HERE]" }
  ]
}

🛠 Deployment

Local Use with Hugging Face Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")
model = AutoModelForCausalLM.from_pretrained("BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model")

GGUF / Ollama / llama.cpp

ollama run doctor-dental-llama3.2

If using a local Modelfile, ensure the prompt template matches the chat format described above rather than an Alpaca-style instruction template.
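A Modelfile for Ollama might look like the sketch below. The GGUF filename is a placeholder, and the template assumes the model follows the standard Llama 3 chat token layout; verify both against your actual quantized file and the model's chat template before use.

```
# Hypothetical Modelfile sketch -- filename and template are assumptions.
FROM ./doctor-dental-implant-llama3.2-3B.Q4_K_M.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|eot_id|>"
```

With the file saved as Modelfile, the model can be registered via `ollama create doctor-dental-llama3.2 -f Modelfile` and then run as shown above.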


โœ๏ธ Author

Created by BirdieByte1024 as part of a medical AI research project using Unsloth and LLaMA 3.2.


📜 License

MIT
