---
library_name: transformers
license: mit
base_model: cointegrated/rubert-tiny2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: results_synt_data
  results: []
---

# results_synt_data

This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1310
- Precision: 0.7870
- Recall: 0.8681
- F1: 0.8256
- Accuracy: 0.9644

An illustrative inference example is included at the end of this card.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

A sketch of a `Trainer` setup matching these settings appears after the framework versions below.

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.061         | 1.0   | 2700  | 0.2892          | 0.4979    | 0.6620 | 0.5684 | 0.9082   |
| 0.0426        | 2.0   | 5400  | 0.1871          | 0.6795    | 0.7840 | 0.7280 | 0.9438   |
| 0.0321        | 3.0   | 8100  | 0.1526          | 0.7172    | 0.8293 | 0.7692 | 0.9519   |
| 0.0257        | 4.0   | 10800 | 0.1382          | 0.7414    | 0.8407 | 0.7880 | 0.9566   |
| 0.0211        | 5.0   | 13500 | 0.1359          | 0.7545    | 0.8477 | 0.7984 | 0.9587   |
| 0.0182        | 6.0   | 16200 | 0.1300          | 0.7738    | 0.8581 | 0.8138 | 0.9620   |
| 0.015         | 7.0   | 18900 | 0.1344          | 0.7811    | 0.8616 | 0.8194 | 0.9625   |
| 0.0135        | 8.0   | 21600 | 0.1309          | 0.7810    | 0.8681 | 0.8223 | 0.9628   |
| 0.0135        | 9.0   | 24300 | 0.1326          | 0.7881    | 0.8681 | 0.8261 | 0.9643   |
| 0.0121        | 10.0  | 27000 | 0.1310          | 0.7870    | 0.8681 | 0.8256 | 0.9644   |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
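
### Training setup sketch

The following is a minimal sketch of a `Trainer` configuration matching the hyperparameters listed above. It assumes a token-classification task, which the precision/recall/F1/accuracy metric set suggests but the card does not confirm; the label count and the `train_ds`/`eval_ds` dataset objects are hypothetical placeholders, since the training data is not described.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2")

num_labels = 5  # hypothetical: the card does not state the label set
model = AutoModelForTokenClassification.from_pretrained(
    "cointegrated/rubert-tiny2", num_labels=num_labels
)

# Hyperparameters taken from the card; everything else is a library default
# or an assumption (per-epoch evaluation matches the logged validation rows).
args = TrainingArguments(
    output_dir="results_synt_data",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",        # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",
)

# `train_ds` and `eval_ds` stand in for tokenized, label-aligned datasets;
# the card does not describe the data, so they are left as placeholders.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    processing_class=tokenizer,
)
trainer.train()
```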
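
## Example usage

A minimal inference sketch, under the same assumption that the fine-tuned checkpoint performs token classification (e.g. NER over Russian text, given the rubert-tiny2 base). The model path is an assumption: swap in the actual checkpoint location or Hub repo id.

```python
from transformers import pipeline

# "results_synt_data" is the training output directory used above; replace it
# with a Hub repo id if the model has been pushed to the Hugging Face Hub.
classifier = pipeline(
    "token-classification",
    model="results_synt_data",
    aggregation_strategy="simple",  # merge word-piece predictions into spans
)

# "An example Russian sentence to test the model."
print(classifier("Пример русского предложения для проверки модели."))
```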