# Fine-tuned Llama 3.1 8B Mental Status Classifier
This model is a fine-tuned version of the Meta Llama 3.1 8B Instruct model, specialized for mental health status classification.
## Training Details
- Base Model: Meta-Llama-3.1-8B-Instruct
- Fine-tuning Method: QLoRA with Unsloth optimization
- Parameters:
  - LoRA rank: 16
  - LoRA alpha: 16
  - Batch size: 4
  - Learning rate: 2e-4
  - Epochs: 1
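The training script itself is not published here, but a minimal sketch of this setup, assuming Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer` and the hyperparameters above (the dataset file, column name, and target modules are illustrative assumptions), might look like:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for QLoRA fine-tuning
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",  # assumption: Unsloth's mirror of the base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the hyperparameters listed above
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank
    lora_alpha=16,   # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption: standard Llama projection layers
)

# Placeholder: the Kaggle sentiment data, preformatted into prompt/label strings
dataset = load_dataset("csv", data_files="mental_health_sentiment.csv")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # placeholder column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```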
## Intended Use
This model is fine-tuned on a Kaggle mental health sentiment analysis dataset to predict a user's mental health status from their input text.
The model classifies input text into one of three categories:
- Normal
- Depression
- Anxiety
The model was developed as part of a larger project: a classification system to determine students' mental health status. It was built for SC1015 by Chia Dion Yi.
## Installation Requirements

```bash
pip install "transformers>=4.35.0" "bitsandbytes>=0.41.0" "accelerate>=0.26.0" "torch>=2.0.0"
```

(The quotes keep the shell from interpreting `>` as a redirect.)
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch

model_name = "fiendfrye/mental-status-classifier-lama-3.1-8b-fine-tuned"

# Load the model with 4-bit quantization (required for this model)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Text to classify
text = "I'm trapped in a storm of emotions that I can't control, and it feels like no one understands the chaos inside me"

# Create the complete prompt
prompt = f"""Classify the text into Normal, Depression, Anxiety, and return the answer as the corresponding mental health disorder label.
text: {text}
label: """

# Use a pipeline for text generation (the model is already device-mapped, so no device argument is needed)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
outputs = pipe(prompt, max_new_tokens=2, do_sample=True, temperature=0.1)
print(outputs[0]["generated_text"].split("label: ")[-1].strip())
```
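Sampling at temperature 0.1 is nearly deterministic, but for fully reproducible labels you can switch the pipeline to greedy decoding:

```python
outputs = pipe(prompt, max_new_tokens=2, do_sample=False)
print(outputs[0]["generated_text"].split("label: ")[-1].strip())
```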
## Hardware Requirements
- This model requires a GPU with at least 8GB of VRAM when using 4-bit quantization
- For inference with full precision, at least 16GB of VRAM is recommended
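Before loading, you can sanity-check the available VRAM; a minimal sketch using PyTorch's CUDA utilities:

```python
import torch

# Report the first GPU's total VRAM and warn if it is below the 8 GB needed for 4-bit loading
if torch.cuda.is_available():
    total_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)}, VRAM: {total_gib:.1f} GiB")
    if total_gib < 8:
        print("Warning: under 8 GiB of VRAM; 4-bit loading may run out of memory.")
else:
    print("No CUDA GPU detected; CPU inference with this model is impractical.")
```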
## Expected Outputs
The model will return one of the following classification labels:
- Normal
- Depression
- Anxiety
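Because the label is produced by free-text generation, the decoded string can carry stray whitespace or casing differences. A small post-processing helper (hypothetical, not shipped with the model) can snap raw output to one of the three labels:

```python
VALID_LABELS = ("Normal", "Depression", "Anxiety")

def normalize_label(raw: str):
    """Map a raw generation to one of the expected labels, or None if nothing matches."""
    cleaned = raw.strip().lower()
    for label in VALID_LABELS:
        if cleaned.startswith(label.lower()):
            return label
    return None

# e.g. normalize_label(" depression\n") -> "Depression"
```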
## Alternative Loading Method
If you encounter issues with the pipeline approach, you can try direct model inference:
```python
# Reuses the model and tokenizer loaded above

# Text to classify
text = "I'm trapped in a storm of emotions that I can't control, and it feels like no one understands the chaos inside me"

# Create the prompt
prompt = f"""Classify the text into Normal, Depression, Anxiety, and return the answer as the corresponding mental health disorder label.
text: {text}
label: """

# Generate with the model directly
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=2,
        do_sample=True,
        temperature=0.1,
    )
result = tokenizer.decode(output[0], skip_special_tokens=True)
print(result.split("label: ")[-1].strip())
```
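To classify many texts at once, the same pipeline accepts a list of prompts (the example texts below are illustrative):

```python
texts = [
    "I feel fine today and enjoyed lunch with friends",
    "I can't stop worrying about everything that could go wrong",
]

prompts = [
    f"""Classify the text into Normal, Depression, Anxiety, and return the answer as the corresponding mental health disorder label.
text: {t}
label: """
    for t in texts
]

# With a list input, the pipeline returns one list of generations per prompt
for out in pipe(prompts, max_new_tokens=2, do_sample=False):
    print(out[0]["generated_text"].split("label: ")[-1].strip())
```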
## Limitations
- This model is intended for educational and research purposes only
- IMPORTANT DISCLAIMER: This model should NOT be used for clinical diagnosis or as a substitute for professional mental health assessment
- Performance may vary with how the input is phrased and with culturally specific expressions of mental health
- The model has been trained on limited datasets and may not generalize well to all populations or expressions of mental health concerns