---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-go_emotions
  results: []
datasets:
- google-research-datasets/go_emotions
pipeline_tag: text-classification
---

# roberta-base-go_emotions

This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [google-research-datasets/go_emotions](https://huggingface.co/datasets/google-research-datasets/go_emotions) dataset. It achieves the following results on the validation set:
- Loss: 0.1086
- Accuracy: 0.4561
- Roc Auc: 0.9064
- Micro Precision: 0.6063
- Micro Recall: 0.5340
- Micro F1: 0.5679
- Macro Precision: 0.5800
- Macro Recall: 0.4344
- Macro F1: 0.4649
- Weighted Precision: 0.5994
- Weighted Recall: 0.5340
- Weighted F1: 0.5591

## Model description

A multi-label emotion classifier: GoEmotions annotates English Reddit comments with 27 emotion categories plus neutral, and a single input may carry several labels at once, which is why micro-, macro-, and weighted-averaged precision, recall, and F1 are reported above.

## Intended uses & limitations

Intended for emotion classification of short English text. The training data consists of Reddit comments, so performance on other domains and registers has not been evaluated.

## Training and evaluation data

Trained and evaluated on the [google-research-datasets/go_emotions](https://huggingface.co/datasets/google-research-datasets/go_emotions) dataset; the metrics above are computed on its validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Roc Auc | Micro Precision | Micro Recall | Micro F1 | Macro Precision | Macro Recall | Macro F1 | Weighted Precision | Weighted Recall | Weighted F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 0.1047 | 1.0 | 5427 | 0.0973 | 0.3616 | 0.8668 | 0.7390 | 0.3710 | 0.4940 | 0.3548 | 0.1954 | 0.2192 | 0.5670 | 0.3710 | 0.4098 |
| 0.09 | 2.0 | 10854 | 0.0876 | 0.4195 | 0.9037 | 0.7497 | 0.4276 | 0.5446 | 0.5715 | 0.2731 | 0.3243 | 0.6961 | 0.4276 | 0.4875 |
| 0.0821 | 3.0 | 16281 | 0.0850 | 0.4477 | 0.9137 | 0.7294 | 0.4627 | 0.5662 | 0.5692 | 0.3174 | 0.3799 | 0.6893 | 0.4627 | 0.5258 |
| 0.0774 | 4.0 | 21708 | 0.0851 | 0.4591 | 0.9178 | 0.6930 | 0.4876 | 0.5725 | 0.5768 | 0.3765 | 0.4273 | 0.6745 | 0.4876 | 0.5435 |
| 0.0736 | 5.0 | 27135 | 0.0856 | 0.4657 | 0.9208 | 0.6844 | 0.4989 | 0.5771 | 0.5741 | 0.3909 | 0.4448 | 0.6715 | 0.4989 | 0.5557 |
| 0.0714 | 6.0 | 32562 | 0.0866 | 0.4619 | 0.9171 | 0.6674 | 0.4991 | 0.5711 | 0.5593 | 0.3845 | 0.4386 | 0.6529 | 0.4991 | 0.5515 |
| 0.0673 | 7.0 | 37989 | 0.0883 | 0.4607 | 0.9209 | 0.6585 | 0.5038 | 0.5708 | 0.5197 | 0.4151 | 0.4522 | 0.6417 | 0.5038 | 0.5539 |
| 0.0604 | 8.0 | 43416 | 0.0902 | 0.4773 | 0.9171 | 0.6530 | 0.5252 | 0.5822 | 0.5623 | 0.4192 | 0.4629 | 0.6316 | 0.5252 | 0.5646 |
| 0.0593 | 9.0 | 48843 | 0.0926 | 0.4714 | 0.9165 | 0.6319 | 0.5263 | 0.5743 | 0.5850 | 0.4208 | 0.4612 | 0.6235 | 0.5263 | 0.5625 |
| 0.0557 | 10.0 | 54270 | 0.0959 | 0.4639 | 0.9155 | 0.6319 | 0.5229 | 0.5723 | 0.5710 | 0.4340 | 0.4705 | 0.6227 | 0.5229 | 0.5602 |
| 0.0512 | 11.0 | 59697 | 0.0985 | 0.4631 | 0.9147 | 0.6203 | 0.5266 | 0.5696 | 0.5656 | 0.4470 | 0.4754 | 0.6162 | 0.5266 | 0.5605 |
| 0.0478 | 12.0 | 65124 | 0.1013 | 0.4644 | 0.9116 | 0.6191 | 0.5279 | 0.5699 | 0.5588 | 0.4426 | 0.4776 | 0.6159 | 0.5279 | 0.5607 |
| 0.0449 | 13.0 | 70551 | 0.1036 | 0.4696 | 0.9080 | 0.6188 | 0.5354 | 0.5741 | 0.5594 | 0.4395 | 0.4729 | 0.6073 | 0.5354 | 0.5618 |
| 0.042 | 14.0 | 75978 | 0.1055 | 0.4700 | 0.9071 | 0.6131 | 0.5409 | 0.5747 | 0.5761 | 0.4399 | 0.4698 | 0.6013 | 0.5409 | 0.5638 |
| 0.0392 | 15.0 | 81405 | 0.1086 | 0.4561 | 0.9064 | 0.6063 | 0.5340 | 0.5679 | 0.5800 | 0.4344 | 0.4649 | 0.5994 | 0.5340 | 0.5591 |

### Framework versions

- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
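## How to use

A minimal usage sketch for multi-label inference. The `model_id` below is a placeholder (the actual hub id depends on where this checkpoint is published), and the per-label scores are assumed to be independent sigmoid outputs, which holds when the model config sets `problem_type="multi_label_classification"`; the 0.5 threshold is an illustrative default, not a tuned value.

```python
def select_labels(scores, threshold=0.5):
    """Multi-label decision rule: keep every emotion whose score clears the threshold.

    `scores` is one input's per-label output from a text-classification pipeline,
    e.g. [{"label": "joy", "score": 0.91}, {"label": "anger", "score": 0.02}].
    """
    return sorted(s["label"] for s in scores if s["score"] >= threshold)


def predict_emotions(texts, model_id="roberta-base-go_emotions", threshold=0.5):
    """Score texts with the fine-tuned checkpoint and threshold each label independently."""
    # Imported lazily so select_labels() stays usable without transformers installed.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id, top_k=None)  # top_k=None returns all label scores
    return [select_labels(scores, threshold) for scores in clf(texts)]
```

Because GoEmotions examples can carry several emotions at once, thresholding each label independently (rather than taking an argmax) is what lets a single input come back with zero, one, or many predicted emotions.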