# Qwen2.5-1.5B LoRA Adapter - Dental Domain
This is a LoRA adapter fine-tuned on a dental instruction-following task using the ADA Dental Code dataset. It helps Qwen2.5-1.5B better explain dental procedure codes in plain English.
## Model Details
- Base model: Qwen/Qwen2.5-1.5B
- Architecture: Causal language model with LoRA
- Adapter type: PEFT (LoRA)
- Language: English (dental/healthcare domain)
- Dataset: TachyHealth/ADA_Dental_Code_to_SBS_V2
- Precision: float32
- Trained on: consumer GPU (GTX 1060, 6GB)
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "BirdieByte1024/Qwen2.5-1.5B-LoRA-dental")
model.to("cuda")  # move the model to GPU so it matches the inputs below

prompt = """### Instruction:
Explain the following dental code.
### Code:
D7140 - Extraction, erupted tooth
### Response:"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
- Requires the base model Qwen/Qwen2.5-1.5B to function; the adapter does not work independently
- Focused on ADA-style dental codes; generalization to other fields is untested
## License
Same license as the base model: Apache 2.0