---
language:
  - ms
library_name: transformers
---

# Safe for Work Classifier Model for Malaysian Data

The current version supports Malay. We are working towards supporting Malay, English, and Indonesian.

The base model is finetuned from https://huggingface.co/mesolitica/malaysian-mistral-191M-MLM-512 on Malaysian NSFW data.

Data Source: https://huggingface.co/datasets/malaysia-ai/Malaysian-NSFW

Github Repo: https://github.com/malaysia-ai/sfw-classifier

Project Board: https://github.com/orgs/malaysia-ai/projects/6


Current labels available:

- religion insult
- sexist
- racist
- psychiatric or mental illness
- harassment
- safe for work
- porn
- self-harm
- violence

## How to use

```python
# classifier.py is provided in the GitHub repo linked above
from classifier import MistralForSequenceClassification

model = MistralForSequenceClassification.from_pretrained('malaysia-ai/malaysian-sfw-classifier')
```
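Once the model returns logits, one score per class, they can be mapped to a label. A minimal sketch of that mapping, in plain Python, is below; the label order shown here is an assumption for illustration — in practice read it from `model.config.id2label` after loading the checkpoint.

```python
import math

# Assumed label order for illustration only; the real order comes from
# model.config.id2label in the loaded checkpoint.
LABELS = [
    "religion insult", "sexist", "racist", "psychiatric or mental illness",
    "harassment", "safe for work", "porn", "self-harm", "violence",
]

def softmax(logits):
    """Numerically stable softmax over a flat list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]
```

For example, a logits vector whose largest entry is at index 5 maps to `"safe for work"` under the assumed ordering.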
## Evaluation

```
                               precision    recall  f1-score   support

                       racist    0.85034   0.90307   0.87591      1661
              religion insult    0.86809   0.85966   0.86386      3399
psychiatric or mental illness    0.94707   0.84181   0.89134      5803
                       sexist    0.80637   0.78992   0.79806      1666
                   harassment    0.83653   0.88663   0.86085       935
                         porn    0.96709   0.95341   0.96020      1202
                safe for work    0.80132   0.91232   0.85322      3205
                    self-harm    0.85751   0.90934   0.88267       364
                     violence    0.77521   0.84049   0.80653      1235

                     accuracy                        0.86754     19470
                    macro avg    0.85661   0.87741   0.86585     19470
                 weighted avg    0.87235   0.86754   0.86822     19470
```
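The aggregate rows follow the standard definitions: macro avg is the unweighted mean of the per-class scores, while weighted avg weights each class by its support. A quick sanity check of the F1 column, using the per-class values copied from the report above:

```python
# (f1-score, support) per class, copied from the report above.
scores = [
    (0.87591, 1661),  # racist
    (0.86386, 3399),  # religion insult
    (0.89134, 5803),  # psychiatric or mental illness
    (0.79806, 1666),  # sexist
    (0.86085, 935),   # harassment
    (0.96020, 1202),  # porn
    (0.85322, 3205),  # safe for work
    (0.88267, 364),   # self-harm
    (0.80653, 1235),  # violence
]

total = sum(s for _, s in scores)                       # 19470 samples
macro_f1 = sum(f for f, _ in scores) / len(scores)      # unweighted mean
weighted_f1 = sum(f * s for f, s in scores) / total     # support-weighted mean
```

Both values agree with the report's macro (0.86585) and weighted (0.86822) F1 up to rounding of the per-class scores.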