RustBustersHSV-Llama-3.2-3B-Instruct-LoRA
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct optimized for laser cleaning customer service interactions. It was developed for RustBustersHSV, a laser cleaning and resurfacing company in Huntsville, Alabama.
Model Details
- Model type: Fine-tuned Llama-3.2-3B-Instruct with LoRA
- Language(s): English
- License: Llama 3.2 Community License
- Finetuning approach: Parameter-efficient fine-tuning with Low-Rank Adaptation (LoRA)
Intended Uses & Limitations
Intended Uses
This model is designed to:
- Answer customer inquiries about laser cleaning services
- Provide detailed information about RustBustersHSV's services
- Help customers understand the laser cleaning process
- Address common concerns and objections
- Guide customers toward requesting a free quote
Limitations
This model:
- Is not designed to provide specific pricing information
- Should not be used for non-laser cleaning domains without further adaptation
- Is limited to English language responses
- May lack expertise in highly technical topics not covered by its training data
- Should be monitored when deployed in a customer-facing environment
Training Procedure
Training Data
The model was fine-tuned on 3,000 synthetic QA pairs categorized into:
- General inquiries about laser cleaning
- Service-specific questions
- Logistics and location information
- Process details
- Concerns and objections
- Customer experience
- Technical aspects
All QA pairs were generated using templates and variations designed to mimic real customer service interactions for a laser cleaning business.
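As a sketch, template-driven QA generation of this kind can look like the following. The templates, slot values, and `make_pairs` helper here are hypothetical illustrations, not the actual generation script used for this model:

```python
import itertools

# Hypothetical templates with {slots}; the real generation script is not published.
TEMPLATES = [
    ("How does laser cleaning handle {material}?",
     "Laser cleaning safely removes contaminants from {material} without abrasives."),
    ("Can you remove {contaminant} from {material}?",
     "Yes, our laser process removes {contaminant} from {material} with no chemicals."),
]
SLOTS = {
    "material": ["steel", "aluminum", "brick"],
    "contaminant": ["rust", "paint"],
}

def make_pairs(templates, slots):
    """Expand every template over all combinations of its slot values."""
    pairs = []
    for q_tpl, a_tpl in templates:
        # Only the slots actually referenced by this template vary.
        used = [name for name in slots if "{" + name + "}" in q_tpl + a_tpl]
        for combo in itertools.product(*(slots[n] for n in used)):
            fill = dict(zip(used, combo))
            pairs.append({"question": q_tpl.format(**fill),
                          "answer": a_tpl.format(**fill)})
    return pairs

pairs = make_pairs(TEMPLATES, SLOTS)
```

Scaling the template and slot inventories (and adding paraphrase variations) is one straightforward way to reach a corpus on the order of 3,000 pairs across the categories above.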
Training Hyperparameters
LoRA Configuration:
- r: 8
- lora_alpha: 16
- lora_dropout: 0.1
- bias: "none"
- target_modules: ["q_proj", "v_proj"]
- task_type: "CAUSAL_LM"
Training Hyperparameters:
- Batch size: 1
- Learning rate: 2e-5
- Optimizer: AdamW
- Sequence length: 128
- Epochs: 3
- Warmup ratio: 0.1
- Early stopping patience: 3
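With the Hugging Face `Trainer`, these values map to roughly the following `TrainingArguments` (a sketch; the exact script and dataset wiring are not published, and the 128-token sequence length is applied at tokenization time via `max_length=128`):

```python
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="rustbusters-lora",
    per_device_train_batch_size=1,
    learning_rate=2e-5,
    num_train_epochs=3,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",       # linear decay after warmup
    optim="adamw_torch",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="eval_loss",
)

# Halt training if eval loss fails to improve for 3 consecutive evaluations
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
```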
Framework Versions
- Transformers 4.38.0+
- PyTorch 2.0+
- PEFT for LoRA fine-tuning
Uses
This model is intended to be used as a customer service assistant for a laser cleaning business. It can be integrated into:
- Live chat on a company website
- Customer inquiry response systems
- Internal knowledge base for employees
- Training materials for new customer service representatives
Bias, Risks, and Limitations
The model is specialized for laser cleaning customer service and may:
- Emphasize the benefits of laser cleaning over alternative methods
- Always attempt to guide customers toward requesting quotes
- Have limited knowledge outside the laser cleaning domain
- Not understand or respond accurately to highly technical queries outside its training
Training Performance
The model was trained using the AdamW optimizer with a linear learning rate scheduler and warmup. Early stopping was used to prevent overfitting.
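Concretely, a linear schedule with warmup ramps the learning rate from 0 to the peak over the first `warmup_ratio` of steps, then decays it linearly back to 0. A minimal reference implementation mirroring what `get_linear_schedule_with_warmup` computes (not the training code itself):

```python
def linear_warmup_lr(step, total_steps, peak_lr=2e-5, warmup_ratio=0.1):
    """Learning rate at a given step under linear warmup + linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp up linearly from 0 to peak_lr over the warmup phase
        return peak_lr * step / max(1, warmup_steps)
    # Decay linearly from peak_lr down to 0 at the final step
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

# With a 10% warmup over 1000 steps, the peak is reached at step 100
lrs = [linear_warmup_lr(s, 1000) for s in range(1001)]
```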
Environmental Impact
- The model was fine-tuned using parameter-efficient LoRA techniques to minimize computational resources
- Training was performed on TPU to maximize efficiency
How to Use
You can load the LoRA adapter on top of the base model with Transformers and PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model
model_name = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load adapter
adapter_path = "RustBustersHSV/Llama-3.2-3B-Instruct-RustBusters"
model = PeftModel.from_pretrained(model, adapter_path)

# Build the prompt with the tokenizer's chat template
system_prompt = (
    "You are Lloyd, the first point of contact for customers of Rustbusters. "
    "Please be warm and friendly and offer actionable information. Rustbusters is a "
    "laser cleaning company that specializes in removing rust, paint, and other "
    "contaminants using advanced laser technology. Our services include industrial "
    "cleaning, restoration, paint removal, and surface preparation."
)
user_prompt = "What is laser cleaning and how does it work?"
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate response (sampling must be enabled for temperature/top_p to apply)
outputs = model.generate(
    inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens, not the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
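For lower-latency serving, the adapter can optionally be merged into the base weights so no PEFT wrapper is needed at inference time. A sketch using PEFT's `merge_and_unload`, with the same model and adapter paths as above (the output directory name is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(
    base, "RustBustersHSV/Llama-3.2-3B-Instruct-RustBusters"
)

# Fold the LoRA deltas into the base weights and drop the adapter wrapper
merged = model.merge_and_unload()

# Save a plain Transformers checkpoint for deployment
merged.save_pretrained("rustbusters-merged")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
tokenizer.save_pretrained("rustbusters-merged")
```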
Community and Contributions
This model is maintained by RustBustersHSV. For questions or issues, please contact [contact information].
Citation
If you use this model in research, please cite:
```bibtex
@misc{rustbustersllama32,
  author       = {RustBustersHSV},
  title        = {RustBustersHSV-Llama-3.2-3B-Instruct-LoRA},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face model repository},
  howpublished = {\url{https://huggingface.co/RustBustersHSV/Llama-3.2-3B-Instruct-RustBusters}}
}
```