mask2former-finetuned-ER-Mito-LD6

This model is a fine-tuned version of facebook/mask2former-swin-base-IN21k-ade-semantic on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:

  • Loss: 36.6791
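
Since the card does not include a usage snippet, below is a minimal inference sketch using the standard `transformers` Mask2Former API. The checkpoint id comes from this card; the image path is a placeholder, and the class set (presumably ER, mitochondria, and lipid droplets, going by the model name) should be checked against the model's `id2label` config.

```python
# A minimal inference sketch, assuming the checkpoint and its preprocessor
# config are available on the Hub. The input image path is a placeholder.
from PIL import Image
import torch
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

checkpoint = "Dnq2025/mask2former-finetuned-ER-Mito-LD6"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example_em_slice.png").convert("RGB")  # placeholder path

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into a (height, width) map of per-pixel class ids;
# image.size is (width, height), so reverse it for target_sizes.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)
print(model.config.id2label)  # inspect the class names the head was trained on
```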

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0004
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 1337
  • optimizer: ADAMW_TORCH (AdamW, PyTorch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: polynomial
  • training_steps: 6450
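
As a reading aid, the sketch below shows one way these settings map onto `transformers.TrainingArguments`. Anything not listed above (output directory, evaluation and logging cadence) is an assumption, not a record of the actual training script.

```python
# A sketch mapping the listed hyperparameters onto transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD6",  # assumed
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",        # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="polynomial",
    max_steps=6450,             # 50 epochs x 129 steps/epoch, per the results table
    eval_strategy="epoch",      # assumed from the per-epoch validation losses
    logging_strategy="epoch",   # assumed from the per-epoch training losses
)
```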

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 54.5306       | 1.0   | 129  | 47.6483         |
| 42.7396       | 2.0   | 258  | 34.9192         |
| 38.1013       | 3.0   | 387  | 31.6591         |
| 32.8585       | 4.0   | 516  | 33.2288         |
| 30.481        | 5.0   | 645  | 31.1107         |
| 29.7583       | 6.0   | 774  | 33.4640         |
| 26.7225       | 7.0   | 903  | 28.5952         |
| 26.0046       | 8.0   | 1032 | 29.7853         |
| 24.8025       | 9.0   | 1161 | 28.1448         |
| 24.0444       | 10.0  | 1290 | 29.0327         |
| 23.1171       | 11.0  | 1419 | 27.7611         |
| 22.0008       | 12.0  | 1548 | 27.4982         |
| 22.4006       | 13.0  | 1677 | 28.5804         |
| 20.6462       | 14.0  | 1806 | 26.5733         |
| 20.1936       | 15.0  | 1935 | 27.0318         |
| 19.4983       | 16.0  | 2064 | 26.2896         |
| 19.4915       | 17.0  | 2193 | 27.2829         |
| 18.8834       | 18.0  | 2322 | 27.4219         |
| 17.7948       | 19.0  | 2451 | 26.9262         |
| 17.7806       | 20.0  | 2580 | 28.1830         |
| 17.3828       | 21.0  | 2709 | 26.5501         |
| 16.7812       | 22.0  | 2838 | 27.7953         |
| 16.5225       | 23.0  | 2967 | 26.7844         |
| 16.6727       | 24.0  | 3096 | 29.3165         |
| 15.9375       | 25.0  | 3225 | 29.3433         |
| 15.554        | 26.0  | 3354 | 27.7353         |
| 15.3216       | 27.0  | 3483 | 28.5868         |
| 15.2336       | 28.0  | 3612 | 30.1337         |
| 14.4484       | 29.0  | 3741 | 29.4535         |
| 14.5668       | 30.0  | 3870 | 29.9552         |
| 14.2886       | 31.0  | 3999 | 30.3295         |
| 13.9594       | 32.0  | 4128 | 30.9996         |
| 13.3464       | 33.0  | 4257 | 29.5446         |
| 14.3524       | 34.0  | 4386 | 30.5839         |
| 13.7015       | 35.0  | 4515 | 31.6050         |
| 13.1693       | 36.0  | 4644 | 30.4525         |
| 13.0106       | 37.0  | 4773 | 30.8857         |
| 13.4503       | 38.0  | 4902 | 33.0173         |
| 12.885        | 39.0  | 5031 | 33.0191         |
| 12.4798       | 40.0  | 5160 | 32.4086         |
| 12.569        | 41.0  | 5289 | 35.1227         |
| 12.2572       | 42.0  | 5418 | 33.3447         |
| 12.1342       | 43.0  | 5547 | 34.8180         |
| 12.6542       | 44.0  | 5676 | 34.2102         |
| 11.9929       | 45.0  | 5805 | 35.5142         |
| 11.2777       | 46.0  | 5934 | 36.4062         |
| 12.3835       | 47.0  | 6063 | 36.1198         |
| 11.4719       | 48.0  | 6192 | 36.6292         |
| 12.1422       | 49.0  | 6321 | 36.8263         |
| 11.636        | 50.0  | 6450 | 36.5817         |

Validation loss reaches its minimum of 26.2896 at epoch 16 (step 2064) and trends upward over the remaining epochs, while training loss continues to fall.

Framework versions

  • Transformers 4.50.0.dev0
  • PyTorch 2.4.1
  • Datasets 3.3.2
  • Tokenizers 0.21.0
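
Note that 4.50.0.dev0 is a development build (installed from source rather than PyPI), so a quick runtime check helps confirm an environment matches the list above:

```python
# Quick runtime check of the library versions against the list above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```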