# Finetuned BERT for Mental Health Text Classification
This model is a fine-tuned version of `bert-base-uncased` for detecting mental health-related categories in textual content. It classifies user-generated posts into five categories:
- Depression
- Anxiety
- Suicidal
- Addiction
- Eating Disorder
It is designed to support research, digital therapy tools, and emotion-aware systems.
## Model Details
- Base model: `bert-base-uncased`
- Fine-tuned on: a custom dataset derived from multiple Kaggle sources
- Classification type: single-label (one of five categories)
- Loss function: cross-entropy
- Framework: PyTorch, Hugging Face Transformers (a fine-tuning sketch is shown below)
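The exact training script is not published with this card. The snippet below is a minimal sketch of a comparable fine-tuning setup with the Hugging Face `Trainer`; the hyperparameters, file names, column names, and label order are assumptions, not the configuration actually used for this model.

```python
# Hypothetical fine-tuning sketch -- hyperparameters, file names, and label
# order are assumptions, not the exact setup used to train this model.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["Addiction", "Anxiety", "Depression", "Eating Disorder", "Suicidal"]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5, id2label=id2label, label2id=label2id
)

# Assumes combined CSVs with a "text" column and a "label" column already
# encoded as integer ids 0-4 matching id2label above.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-emotion-bert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

# For single-label classification, the model applies cross-entropy loss internally.
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
```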
## Dataset
The dataset used for training and testing was compiled from multiple Kaggle sources involving real-world discussions related to mental health. It contains posts categorized into the five emotion/mental-health topics.
- Training samples were selected from five original CSV files and combined into a single file.
- Testing data was kept separate and sourced similarly.
You can find the dataset here: Noobie314/mental-health-posts-dataset
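The exact preprocessing script is not included in this card. The snippet below is a rough sketch of how several per-category CSVs could be combined into a single training file with pandas; the file names and column names are placeholders.

```python
# Hypothetical preprocessing sketch: merge five per-category CSVs into one
# training file. File and column names are placeholders, not the actual
# files used to build the dataset.
import pandas as pd

files = {
    "depression.csv": "Depression",
    "anxiety.csv": "Anxiety",
    "suicidal.csv": "Suicidal",
    "addiction.csv": "Addiction",
    "eating_disorder.csv": "Eating Disorder",
}

frames = []
for path, label in files.items():
    df = pd.read_csv(path)          # assumes each CSV has a "text" column
    df["label"] = label             # attach the category name to every post
    frames.append(df[["text", "label"]])

# Shuffle the combined posts and write a single training CSV.
train_df = pd.concat(frames, ignore_index=True).sample(frac=1, random_state=42)
train_df.to_csv("train.csv", index=False)
```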
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Noobie314/finetuned-emotion-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I'm feeling hopeless and tired of everything..."
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients.
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=1).item()
print(predicted_class)
```
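To turn the predicted index into a human-readable label and a confidence score, you can apply a softmax over the logits and look up the model's `id2label` mapping (assuming the label mapping was stored in the config when the model was saved):

```python
import torch.nn.functional as F

# Softmax converts the raw logits into per-class probabilities.
probs = F.softmax(outputs.logits, dim=1)[0]

# id2label comes from the model config; if it was not set during fine-tuning,
# the entries fall back to generic names such as "LABEL_0".
label = model.config.id2label[predicted_class]
print(f"{label}: {probs[predicted_class]:.3f}")
```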
## Evaluation
The model was evaluated on a held-out test set of roughly 10,000 posts (9,980 labeled samples), with the following results:
| Category | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Addiction | 0.914 | 0.911 | 0.912 | 1000 |
| Anxiety | 0.831 | 0.808 | 0.819 | 1996 |
| Depression | 0.772 | 0.621 | 0.688 | 3990 |
| Eating Disorder | 0.916 | 0.921 | 0.919 | 1000 |
| Suicidal | 0.531 | 0.752 | 0.622 | 1994 |
| Accuracy | | | 0.744 | |
| Macro avg | 0.793 | 0.803 | 0.792 | 9980 |
| Weighted avg | 0.764 | 0.744 | 0.747 | 9980 |
- Accuracy: 74.36%
- Macro Average F1-Score: 0.792
- Weighted Average F1-Score: 0.747
The model performs strongly on the Eating Disorder and Addiction categories, while the lower recall for Depression (0.621) and lower precision for Suicidal (0.531) suggest these two classes are frequently confused with one another.
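For reference, a report in this format can be produced with scikit-learn's `classification_report`. The snippet below is a generic sketch; the variables `y_true` and `y_pred` are assumed to hold the integer test labels and the model's predictions.

```python
# Generic evaluation sketch -- y_true / y_pred are assumed to be lists of
# integer class ids for the test set and the model's predictions.
from sklearn.metrics import classification_report

target_names = ["Addiction", "Anxiety", "Depression", "Eating Disorder", "Suicidal"]
print(classification_report(y_true, y_pred, target_names=target_names, digits=3))
```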
## Intended Uses
This model is intended for:
- NLP research in mental health domains
- Emotion-aware digital assistants
- Content moderation tools focused on emotional tone and risk
**Disclaimer:** This model is not suitable for clinical or medical decision-making. It does not replace licensed mental health professionals.
## License
This model is licensed under the Apache 2.0 License.
For questions, suggestions, or collaborations, feel free to open an issue or contact via the Hugging Face Hub.