mask2former-finetuned-ER-Mito-LD5

This model is a fine-tuned version of facebook/mask2former-swin-base-IN21k-ade-semantic on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:

  • Loss: 33.3884
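
The card does not ship usage code; below is a minimal inference sketch using the Transformers API. The input filename is a placeholder, and semantic post-processing is assumed because the base checkpoint is the ADE semantic variant.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

model_id = "Dnq2025/mask2former-finetuned-ER-Mito-LD5"
processor = AutoImageProcessor.from_pretrained(model_id)
model = Mask2FormerForUniversalSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("example_em_image.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Semantic post-processing assumed, matching the -ade-semantic base checkpoint;
# returns an (H, W) tensor of per-pixel class indices at the original resolution.
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```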

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0004
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 1337
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: polynomial
  • training_steps: 6450
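
For reference, here is a hypothetical reconstruction of how these values map onto a Transformers TrainingArguments object; everything not listed above (output directory, evaluation schedule) is an assumption:

```python
from transformers import TrainingArguments

# Sketch only: model/dataset setup and the Trainer call are omitted.
training_args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD5",  # assumed
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are defaults
    lr_scheduler_type="polynomial",
    max_steps=6450,                  # 50 epochs x 129 steps per epoch
    eval_strategy="epoch",           # assumed from the per-epoch results table below
)
```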

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 53.498  | 1.0  | 129  | 39.6802 |
| 39.3157 | 2.0  | 258  | 35.9394 |
| 37.2096 | 3.0  | 387  | 32.0225 |
| 31.9877 | 4.0  | 516  | 33.2635 |
| 30.1511 | 5.0  | 645  | 29.8756 |
| 28.3667 | 6.0  | 774  | 30.3257 |
| 26.7492 | 7.0  | 903  | 27.9416 |
| 25.6035 | 8.0  | 1032 | 27.4391 |
| 24.5091 | 9.0  | 1161 | 28.4225 |
| 23.8578 | 10.0 | 1290 | 26.4271 |
| 22.6785 | 11.0 | 1419 | 26.4148 |
| 22.0847 | 12.0 | 1548 | 26.6679 |
| 22.0106 | 13.0 | 1677 | 26.7030 |
| 20.45   | 14.0 | 1806 | 26.1600 |
| 20.1949 | 15.0 | 1935 | 26.2444 |
| 19.1922 | 16.0 | 2064 | 27.0105 |
| 18.9458 | 17.0 | 2193 | 24.9449 |
| 18.46   | 18.0 | 2322 | 27.8372 |
| 17.3966 | 19.0 | 2451 | 27.0517 |
| 17.5908 | 20.0 | 2580 | 28.5696 |
| 16.9413 | 21.0 | 2709 | 27.3707 |
| 16.3963 | 22.0 | 2838 | 26.4041 |
| 16.2948 | 23.0 | 2967 | 25.3316 |
| 16.2511 | 24.0 | 3096 | 27.9766 |
| 15.4496 | 25.0 | 3225 | 27.6993 |
| 15.1992 | 26.0 | 3354 | 27.9919 |
| 14.9445 | 27.0 | 3483 | 25.4937 |
| 14.8226 | 28.0 | 3612 | 28.7659 |
| 14.264  | 29.0 | 3741 | 26.7018 |
| 14.348  | 30.0 | 3870 | 28.9018 |
| 13.936  | 31.0 | 3999 | 28.2813 |
| 13.7577 | 32.0 | 4128 | 30.0501 |
| 13.1629 | 33.0 | 4257 | 28.0087 |
| 14.1035 | 34.0 | 4386 | 28.3435 |
| 13.4379 | 35.0 | 4515 | 28.9629 |
| 12.9478 | 36.0 | 4644 | 29.8509 |
| 12.8114 | 37.0 | 4773 | 28.9036 |
| 13.2322 | 38.0 | 4902 | 29.9045 |
| 12.7433 | 39.0 | 5031 | 31.3430 |
| 12.3428 | 40.0 | 5160 | 31.3746 |
| 12.3295 | 41.0 | 5289 | 31.6009 |
| 12.1459 | 42.0 | 5418 | 31.6387 |
| 11.8999 | 43.0 | 5547 | 32.2195 |
| 12.4076 | 44.0 | 5676 | 32.5034 |
| 11.7797 | 45.0 | 5805 | 32.9062 |
| 11.1345 | 46.0 | 5934 | 32.4447 |
| 12.3552 | 47.0 | 6063 | 32.7274 |
| 11.3111 | 48.0 | 6192 | 33.0397 |
| 11.8742 | 49.0 | 6321 | 33.3195 |
| 11.5268 | 50.0 | 6450 | 33.3223 |

Framework versions

  • Transformers 4.50.0.dev0
  • Pytorch 2.4.1
  • Datasets 3.3.2
  • Tokenizers 0.21.0