mask2former-finetuned-ER-Mito-LD4

This model is a fine-tuned version of facebook/mask2former-swin-base-IN21k-ade-semantic on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:

  • Loss: 33.8611
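
For reference, a minimal semantic-segmentation inference sketch using the 🤗 Transformers API; the input image path is hypothetical and the preprocessing defaults are assumed to come from the checkpoint's image processor:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Load the fine-tuned checkpoint and its image processor.
processor = AutoImageProcessor.from_pretrained("Dnq2025/mask2former-finetuned-ER-Mito-LD4")
model = Mask2FormerForUniversalSegmentation.from_pretrained(
    "Dnq2025/mask2former-finetuned-ER-Mito-LD4"
)
model.eval()

image = Image.open("example.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Merge the predicted mask/class queries into a per-pixel semantic map.
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]  # (height, width) tensor of class ids
```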

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
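
The card gives no detail beyond the dataset id on the Hub. A minimal loading sketch, assuming the dataset is public and loads with its default configuration:

```python
from datasets import load_dataset

# Assumption: default configuration and split names; inspect the
# printed features to see the image/annotation columns.
dataset = load_dataset("Dnq2025/Mask2former_Pretrain")
print(dataset)
```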

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0004
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 1337
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: polynomial
  • training_steps: 6450
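
The training script itself was not published; a hypothetical TrainingArguments sketch mirroring the listed hyperparameters (output_dir is an assumption):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters;
# the actual training script is not included in this card.
training_args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD4",  # assumed
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    max_steps=6450,
)
```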

Training results

| Training Loss | Epoch | Step | Dummy | Validation Loss |
|:-------------:|:-----:|:----:|:-----:|:---------------:|
| 53.0042 | 1.0 | 129 | 1.0 | 41.6702 |
| 41.5032 | 2.0 | 258 | 1.0 | 35.3152 |
| 37.8318 | 3.0 | 387 | 1.0 | 33.2929 |
| 33.1734 | 4.0 | 516 | 1.0 | 31.6052 |
| 31.0889 | 5.0 | 645 | 1.0 | 32.0792 |
| 30.5091 | 6.0 | 774 | 1.0 | 29.4252 |
| 27.7742 | 7.0 | 903 | 1.0 | 29.3660 |
| 27.1136 | 8.0 | 1032 | 1.0 | 28.6043 |
| 25.1614 | 9.0 | 1161 | 1.0 | 28.0848 |
| 24.7794 | 10.0 | 1290 | 1.0 | 28.1507 |
| 23.636 | 11.0 | 1419 | 1.0 | 28.3853 |
| 22.7494 | 12.0 | 1548 | 1.0 | 27.2592 |
| 22.7129 | 13.0 | 1677 | 1.0 | 29.8838 |
| 21.1747 | 14.0 | 1806 | 1.0 | 28.1624 |
| 20.9589 | 15.0 | 1935 | 1.0 | 27.9121 |
| 20.2591 | 16.0 | 2064 | 1.0 | 26.6467 |
| 20.1436 | 17.0 | 2193 | 1.0 | 26.9901 |
| 19.5047 | 18.0 | 2322 | 1.0 | 29.2895 |
| 18.4257 | 19.0 | 2451 | 1.0 | 27.0489 |
| 18.6316 | 20.0 | 2580 | 1.0 | 27.3730 |
| 18.037 | 21.0 | 2709 | 1.0 | 28.0853 |
| 17.6324 | 22.0 | 2838 | 1.0 | 26.6344 |
| 17.19 | 23.0 | 2967 | 1.0 | 28.1709 |
| 17.5784 | 24.0 | 3096 | 1.0 | 26.3646 |
| 16.3714 | 25.0 | 3225 | 1.0 | 28.6477 |
| 16.2177 | 26.0 | 3354 | 1.0 | 29.9328 |
| 15.8326 | 27.0 | 3483 | 1.0 | 27.1418 |
| 15.7345 | 28.0 | 3612 | 1.0 | 28.5265 |
| 14.918 | 29.0 | 3741 | 1.0 | 30.8378 |
| 15.2316 | 30.0 | 3870 | 1.0 | 28.5173 |
| 14.6576 | 31.0 | 3999 | 1.0 | 29.0688 |
| 14.5837 | 32.0 | 4128 | 1.0 | 29.7354 |
| 13.7819 | 33.0 | 4257 | 1.0 | 28.6140 |
| 14.851 | 34.0 | 4386 | 1.0 | 30.7131 |
| 14.1454 | 35.0 | 4515 | 1.0 | 29.3673 |
| 13.5445 | 36.0 | 4644 | 1.0 | 30.1412 |
| 13.3725 | 37.0 | 4773 | 1.0 | 29.7489 |
| 13.8976 | 38.0 | 4902 | 1.0 | 32.2482 |
| 13.2317 | 39.0 | 5031 | 1.0 | 33.3837 |
| 12.8382 | 40.0 | 5160 | 1.0 | 31.9261 |
| 12.8798 | 41.0 | 5289 | 1.0 | 31.0644 |
| 12.5615 | 42.0 | 5418 | 1.0 | 32.6052 |
| 12.4595 | 43.0 | 5547 | 1.0 | 32.6710 |
| 12.9861 | 44.0 | 5676 | 1.0 | 32.3271 |
| 12.3429 | 45.0 | 5805 | 1.0 | 33.1802 |
| 11.6031 | 46.0 | 5934 | 1.0 | 33.3981 |
| 12.7182 | 47.0 | 6063 | 1.0 | 33.2806 |
| 11.8251 | 48.0 | 6192 | 1.0 | 33.9491 |
| 12.4439 | 49.0 | 6321 | 1.0 | 33.4338 |
| 11.9834 | 50.0 | 6450 | 1.0 | 33.8444 |
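
Validation loss bottoms out at epoch 24 (26.3646) and trends upward afterwards, so the final checkpoint is not the best one by this metric. For a rerun, a hypothetical way to keep the lowest-eval-loss checkpoint with the Trainer (these settings were not part of the original configuration):

```python
from transformers import TrainingArguments

# Hypothetical: evaluate and save each epoch, then reload the
# checkpoint with the lowest eval_loss at the end of training.
args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD4",  # assumed
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```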

Framework versions

  • Transformers 4.50.0.dev0
  • Pytorch 2.4.1
  • Datasets 3.3.2
  • Tokenizers 0.21.0