practica_2

This model is a fine-tuned version of hustvl/yolos-tiny on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4715
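
The card does not include a usage snippet, so here is a minimal sketch of loading this checkpoint for object detection with the transformers pipeline API. The repo id seayala/practica_2 is taken from the model page; the `detect` wrapper and its default threshold are illustrative choices, not part of the original card.

```python
from functools import lru_cache


@lru_cache(maxsize=1)
def get_detector():
    """Lazily build an object-detection pipeline for this checkpoint.

    The model download happens on the first call, not at import time.
    Requires `pip install transformers pillow torch`.
    """
    from transformers import pipeline
    return pipeline("object-detection", model="seayala/practica_2")


def detect(image, threshold=0.5):
    """Run detection on a PIL image or an image path.

    Returns a list of {"score", "label", "box"} dicts for detections
    whose confidence exceeds `threshold`.
    """
    return get_detector()(image, threshold=threshold)
```

Wrapping the pipeline in a cached factory keeps the (slow) model load out of module import and reuses one pipeline across calls.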

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
  • mixed_precision_training: Native AMP
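
With lr_scheduler_type set to linear and no warmup steps listed, the learning rate decays linearly from 1e-05 to zero over the run (17 optimizer steps per epoch × 100 epochs = 1700 steps, matching the step column in the results table below). A sketch of that arithmetic, assuming zero warmup:

```python
BASE_LR = 1e-5
TOTAL_STEPS = 17 * 100  # 17 optimizer steps/epoch x 100 epochs = 1700


def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS, warmup_steps=0):
    """Learning rate under a linear schedule: ramp up over `warmup_steps`,
    then decay linearly to zero at `total_steps`."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

Halfway through training (step 850) the rate has halved to 5e-06, and it reaches zero at step 1700.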

Training results

Training Loss Epoch Step Validation Loss
No log 1.0 17 1.0138
No log 2.0 34 0.9049
1.0786 3.0 51 0.7578
1.0786 4.0 68 0.6868
1.0786 5.0 85 0.6194
0.6333 6.0 102 0.6252
0.6333 7.0 119 0.5737
0.6333 8.0 136 0.5558
0.4516 9.0 153 0.5284
0.4516 10.0 170 0.5306
0.4516 11.0 187 0.5385
0.3783 12.0 204 0.5209
0.3783 13.0 221 0.5044
0.3783 14.0 238 0.5468
0.3213 15.0 255 0.4953
0.3213 16.0 272 0.4678
0.3213 17.0 289 0.4901
0.2909 18.0 306 0.5269
0.2909 19.0 323 0.4885
0.2909 20.0 340 0.4361
0.2532 21.0 357 0.4837
0.2532 22.0 374 0.4971
0.2532 23.0 391 0.4813
0.2312 24.0 408 0.4834
0.2312 25.0 425 0.4834
0.2312 26.0 442 0.4724
0.2075 27.0 459 0.4669
0.2075 28.0 476 0.4541
0.2075 29.0 493 0.4681
0.1792 30.0 510 0.5126
0.1792 31.0 527 0.4681
0.1792 32.0 544 0.4758
0.1717 33.0 561 0.4669
0.1717 34.0 578 0.4921
0.1717 35.0 595 0.4918
0.1669 36.0 612 0.4759
0.1669 37.0 629 0.4758
0.1669 38.0 646 0.4838
0.1614 39.0 663 0.4839
0.1614 40.0 680 0.4564
0.1614 41.0 697 0.4223
0.1492 42.0 714 0.5006
0.1492 43.0 731 0.4495
0.1492 44.0 748 0.4679
0.1374 45.0 765 0.4811
0.1374 46.0 782 0.4657
0.1374 47.0 799 0.4606
0.1326 48.0 816 0.4646
0.1326 49.0 833 0.4896
0.1323 50.0 850 0.4963
0.1323 51.0 867 0.4636
0.1323 52.0 884 0.4806
0.1255 53.0 901 0.4568
0.1255 54.0 918 0.4523
0.1255 55.0 935 0.4607
0.1178 56.0 952 0.4678
0.1178 57.0 969 0.4743
0.1178 58.0 986 0.4830
0.1105 59.0 1003 0.4721
0.1105 60.0 1020 0.5013
0.1105 61.0 1037 0.4657
0.1108 62.0 1054 0.4672
0.1108 63.0 1071 0.4606
0.1108 64.0 1088 0.4321
0.1085 65.0 1105 0.4613
0.1085 66.0 1122 0.4911
0.1085 67.0 1139 0.5074
0.1 68.0 1156 0.4333
0.1 69.0 1173 0.4372
0.1 70.0 1190 0.4237
0.0987 71.0 1207 0.4571
0.0987 72.0 1224 0.4450
0.0987 73.0 1241 0.4535
0.0943 74.0 1258 0.4631
0.0943 75.0 1275 0.4858
0.0943 76.0 1292 0.4881
0.0906 77.0 1309 0.4838
0.0906 78.0 1326 0.4543
0.0906 79.0 1343 0.4522
0.0933 80.0 1360 0.4555
0.0933 81.0 1377 0.4306
0.0933 82.0 1394 0.5012
0.089 83.0 1411 0.4685
0.089 84.0 1428 0.4543
0.089 85.0 1445 0.4630
0.0812 86.0 1462 0.4715
0.0812 87.0 1479 0.4896
0.0812 88.0 1496 0.4587
0.0779 89.0 1513 0.4929
0.0779 90.0 1530 0.4443
0.0779 91.0 1547 0.4598
0.0783 92.0 1564 0.4413
0.0783 93.0 1581 0.4412
0.0783 94.0 1598 0.4456
0.077 95.0 1615 0.5037
0.077 96.0 1632 0.4462
0.077 97.0 1649 0.4611
0.0819 98.0 1666 0.4617
0.0819 99.0 1683 0.4344
0.0765 100.0 1700 0.4715
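
Note that the reported final loss (0.4715, epoch 100) is not the minimum in the table: validation loss bottoms out at 0.4223 around epoch 41. Selecting the best checkpoint from such a log is a one-liner; the rows below are excerpted from the table above.

```python
# (epoch, validation loss) pairs excerpted from the results table
history = [
    (20, 0.4361), (41, 0.4223), (70, 0.4237),
    (81, 0.4306), (99, 0.4344), (100, 0.4715),
]

# Pick the checkpoint with the lowest validation loss
best_epoch, best_loss = min(history, key=lambda row: row[1])
```

In the transformers Trainer this corresponds to setting load_best_model_at_end=True with metric_for_best_model="eval_loss", rather than keeping the last epoch.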

Framework versions

  • Transformers 4.48.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0
Model size: 6.47M params (Safetensors, F32 tensors)

Model tree for seayala/practica_2

Base model: hustvl/yolos-tiny