DistilBERT for Depression Detection

This model is a fine-tuned version of distilbert-base-uncased for binary depression classification, trained on Reddit and other mental-health-related posts.

📊 Training Details

  • Base model: distilbert-base-uncased
  • Epochs: 3
  • Batch size: 8 (train), 16 (eval)
  • Optimizer: AdamW with weight decay
  • Loss function: CrossEntropyLoss
  • Hardware: Trained using GPU acceleration
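
A minimal fine-tuning sketch consistent with the settings above, using the Hugging Face Trainer API. The dataset variables (train_ds, eval_ds), the output directory, and the exact weight-decay value are assumptions for illustration, not the original training script; note that Trainer uses AdamW by default and applies CrossEntropyLoss for sequence classification with integer labels.

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Binary classification head: 0 = not depressed, 1 = depressed
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="depression-detection-model",  # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    weight_decay=0.01,  # AdamW weight decay; exact value assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed: tokenized training split
    eval_dataset=eval_ds,    # assumed: tokenized evaluation split
)
trainer.train()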

🧾 Datasets Used

The datasets were cleaned before training: rows with missing text were removed, labels were binarized (0 = not depressed, 1 = depressed), and duplicate posts were dropped.
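
For illustration, that cleaning could look like the following pandas sketch; the file name and the text/label column names are assumptions, and the binarization step depends on the source label scheme:

import pandas as pd

df = pd.read_csv("posts.csv")             # assumed input file and columns
df = df.dropna(subset=["text"])           # drop rows with missing text
df["label"] = (df["label"].astype(int) != 0).astype(int)  # binarize: 0 = not depressed, 1 = depressed
df = df.drop_duplicates(subset=["text"])  # remove duplicate posts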

🧪 Evaluation

Metric         Value
Loss           0.0631
Samples/sec    85.56
Steps/sec      5.35
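
These values correspond to the standard metric keys returned by trainer.evaluate(); a sketch, reusing the trainer from the training section above:

metrics = trainer.evaluate()                # runs on the eval split with batch size 16
print(metrics["eval_loss"])                 # 0.0631 on this run
print(metrics["eval_samples_per_second"])   # 85.56
print(metrics["eval_steps_per_second"])     # 5.35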

🚀 Usage

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("TRT1000/depression-detection-model")
tokenizer = AutoTokenizer.from_pretrained("TRT1000/depression-detection-model")
model.eval()  # disable dropout for inference

inputs = tokenizer("I feel sad and hopeless", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = torch.argmax(logits, dim=-1).item()  # 0 = not depressed, 1 = depressed

print("Prediction:", predicted_class)
Model size: 67M parameters (F32, safetensors)