Noobie314 committed · verified
Commit 7922bbf · Parent: 51515a8

Create README.md

Files changed (1): README.md (+102 −0)

README.md (new file):
---
license: apache-2.0
datasets:
- Noobie314/mental-health-posts-dataset
language:
- en
metrics:
- accuracy
- f1
- precision
base_model:
- mental/mental-roberta-base
pipeline_tag: text-classification
library_name: transformers
tags:
- RoBERTa
- mental-health-nlp
- emotion-classification
- mental-health-therapy
- fine-tuned-mental-health
---
# 🧠 Fine-tuned RoBERTa for Mental Health Text Classification

This model is a fine-tuned version of `mental/mental-roberta-base` for detecting mental health-related categories in text. It classifies user-generated posts into **five categories**:

- 🟦 Depression
- 🟨 Anxiety
- 🔴 Suicidal
- 🟩 Addiction
- 🟪 Eating Disorder

It is designed to support research, digital therapy tools, and emotion-aware systems.

## 📝 Model Details

- **Base model**: `mental/mental-roberta-base`
- **Fine-tuned on**: a custom Kaggle-aggregated dataset of mental health-related posts
- **Output**: single-label classification (one of the five categories)
- **Loss function**: cross-entropy
- **Format**: PyTorch model with Hugging Face Transformers compatibility

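As a rough illustration of the details above (a minimal sketch, not the exact training script), a five-way head can be attached to the base model as follows. The label order shown here is an assumption; the released checkpoint's own mapping lives in `model.config.id2label`.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative label mapping; the checkpoint's actual order may differ.
id2label = {0: "Depression", 1: "Anxiety", 2: "Suicidal", 3: "Addiction", 4: "Eating Disorder"}
label2id = {name: idx for idx, name in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("mental/mental-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "mental/mental-roberta-base",
    num_labels=5,
    id2label=id2label,
    label2id=label2id,
)

# When `labels` are supplied, the forward pass returns the cross-entropy
# loss that is minimized during fine-tuning.
batch = tokenizer(["I can't stop worrying about everything."], return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
```
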
## 🧪 Dataset

The dataset used for training and testing was compiled from multiple Kaggle sources of real-world discussions related to mental health. It contains posts categorized into the five emotion/mental-health topics above.

- Training samples were selected from five original CSV files and combined into a single file.
- Testing data was kept separate and sourced similarly.

> 📦 **You can find the dataset here**: [Noobie314/mental-health-posts-dataset](https://huggingface.co/datasets/Noobie314/mental-health-posts-dataset)

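A hedged loading sketch (the split and column names below, such as `train` and `text`/`label`, are assumptions about the dataset's layout rather than documented facts):

```python
from datasets import load_dataset

# Load the dataset straight from the Hub and inspect its structure.
ds = load_dataset("Noobie314/mental-health-posts-dataset")
print(ds)              # available splits and columns
print(ds["train"][0])  # one example, assuming a "train" split exists
```
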
## 🛠️ How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Noobie314/finetuned-roberta-mental-health"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I'm feeling hopeless and tired of everything..."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Get predicted label
predicted_class = outputs.logits.argmax(dim=1).item()
```
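
Continuing the snippet above, the predicted index can be mapped back to a category name and to class probabilities. This is a small follow-up sketch; if the checkpoint's config does not set `id2label`, Transformers falls back to generic `LABEL_0` … `LABEL_4` names:

```python
import torch

# Turn logits into probabilities and look the label name up in the config.
probs = torch.softmax(outputs.logits, dim=-1)[0]
label = model.config.id2label[predicted_class]
print(f"{label}: {probs[predicted_class].item():.3f}")
```
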
## 📊 Evaluation

The model was evaluated on a held-out test set with standard metrics:

- **Accuracy**: 78.32%
- **F1 Score (macro)**: 82.22%
- **Precision & Recall**: reported per class in the table below

| Category        | Precision | Recall | F1-Score | Support |
|-----------------|-----------|--------|----------|---------|
| Addiction       | 94.62%    | 91.40% | 92.98%   | 1000    |
| Anxiety         | 88.19%    | 82.31% | 85.15%   | 1996    |
| Depression      | 77.13%    | 72.86% | 74.93%   | 3990    |
| Eating Disorder | 92.77%    | 93.60% | 93.18%   | 1000    |
| Suicidal        | 59.67%    | 71.01% | 64.85%   | 1994    |

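Per-class numbers of this kind are typically produced with `sklearn.metrics.classification_report`. A hedged evaluation sketch (not the original evaluation script) that runs the fine-tuned model over a labelled test set:

```python
import torch
from sklearn.metrics import classification_report

def evaluate(model, tokenizer, texts, labels, batch_size=32):
    """Print per-class precision/recall/F1 and accuracy for `texts` vs. integer `labels`."""
    model.eval()
    preds = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            preds.extend(model(**batch).logits.argmax(dim=-1).tolist())
    # Report labels in the checkpoint's own id order.
    names = [model.config.id2label[i] for i in range(model.config.num_labels)]
    print(classification_report(labels, preds, target_names=names, digits=4))
```
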
## ✅ Intended Uses

This model is intended for:

- Research on mental health-related NLP
- Emotion-aware content moderation
- Digital therapy assistants

> ⚠️ **Disclaimer**: This model is not intended for medical diagnosis or treatment. It should not be used as a substitute for professional mental health support.

## 📜 License

This project is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

---

📬 For questions or collaborations, feel free to reach out through the Hugging Face Hub.

---