# DistilBERT for Depression Detection
This model is a fine-tuned version of distilbert-base-uncased for binary depression classification, trained on Reddit and other mental health-related posts.
## Training Details
- Base model: distilbert-base-uncased
- Epochs: 3
- Batch size: 8 (train), 16 (eval)
- Optimizer: AdamW with weight decay
- Loss function: CrossEntropyLoss
- Hardware: Trained using GPU acceleration
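A minimal sketch of how these settings map onto the Hugging Face `Trainer` API. The dataset files, column names (`text`, `label`), and the weight-decay value are assumptions for illustration; `Trainer` uses AdamW and cross-entropy loss by default for sequence classification.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Hypothetical cleaned dataset with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="depression-detection-model",
    num_train_epochs=3,             # Epochs: 3
    per_device_train_batch_size=8,  # Train batch size: 8
    per_device_eval_batch_size=16,  # Eval batch size: 16
    weight_decay=0.01,              # AdamW with weight decay (value assumed)
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```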
## Datasets Used
The datasets were cleaned by removing rows with missing text, binarizing the labels (0 = not depressed, 1 = depressed), and dropping duplicates.
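A sketch of those cleaning steps with pandas; the file names and the `text`/`label` column names are assumptions for illustration.

```python
import pandas as pd

df = pd.read_csv("raw_posts.csv")                 # hypothetical raw dataset

df = df.dropna(subset=["text"])                   # remove rows with missing text
df["label"] = (df["label"] != 0).astype(int)      # binarize: 0 = not depressed, 1 = depressed
df = df.drop_duplicates(subset=["text"])          # remove duplicate posts

df.to_csv("cleaned_posts.csv", index=False)
```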
## Evaluation
| Metric          | Value  |
|-----------------|--------|
| Evaluation loss | 0.0631 |
| Samples/sec     | 85.56  |
| Steps/sec       | 5.35   |
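These figures correspond to the output of `Trainer.evaluate()`; a sketch, assuming the `trainer` object from the training example above (metric key names can vary slightly across transformers versions).

```python
# Run evaluation on the validation split and report the logged metrics.
metrics = trainer.evaluate()
print(metrics["eval_loss"])                # e.g. 0.0631
print(metrics["eval_samples_per_second"])  # e.g. 85.56
print(metrics["eval_steps_per_second"])    # e.g. 5.35
```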
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("TRT1000/depression-detection-model")
tokenizer = AutoTokenizer.from_pretrained("TRT1000/depression-detection-model")

inputs = tokenizer("I feel sad and hopeless", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = torch.argmax(logits, dim=-1).item()  # 0 = not depressed, 1 = depressed
print("Prediction:", predicted_class)
```