# swin-large-mask2former-finetuned-ER-Mito-LD8
This model is a fine-tuned version of [facebook/mask2former-swin-large-ade-semantic](https://huggingface.co/facebook/mask2former-swin-large-ade-semantic) on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:
- Mean Iou: 0.5775
- Loss: 34.1770
## Model description
More information needed
## Intended uses & limitations
More information needed
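
Pending usage details from the author, here is a minimal inference sketch, assuming the checkpoint follows the standard Mask2Former semantic-segmentation API in `transformers` (the input image path is a placeholder):

```python
# Minimal inference sketch (assumption: standard Mask2Former semantic API).
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

checkpoint = "Dnq2025/swin-large-mask2former-finetuned-ER-Mito-LD8"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Merge the predicted mask proposals into a single per-pixel label map.
# image.size is (width, height); target_sizes expects (height, width).
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]  # (height, width) tensor of class ids
```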
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 6450
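
These settings map directly onto `transformers` `TrainingArguments`; the following is a minimal sketch under that assumption (the output directory and evaluation strategy are illustrative, and the model/dataset wiring is omitted):

```python
# Sketch of the reported hyperparameters as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-large-mask2former-finetuned-ER-Mito-LD8",  # illustrative
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",            # AdamW; betas=(0.9, 0.999), eps=1e-8 are defaults
    lr_scheduler_type="polynomial",
    max_steps=6450,
    eval_strategy="epoch",          # assumption: the results table reports per-epoch evals
)
```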
### Training results
| Training Loss | Epoch | Step | Mean Iou | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 50.2838 | 1.0 | 129 | 0.3320 | 40.2146 |
| 38.6838 | 2.0 | 258 | 0.4777 | 33.3521 |
| 35.7266 | 3.0 | 387 | 0.5179 | 30.3853 |
| 28.8023 | 4.0 | 516 | 0.5172 | 31.1722 |
| 27.499 | 5.0 | 645 | 0.5405 | 29.2993 |
| 26.443 | 6.0 | 774 | 0.6063 | 27.7216 |
| 24.4565 | 7.0 | 903 | 0.5585 | 27.0999 |
| 23.7098 | 8.0 | 1032 | 0.5561 | 27.6352 |
| 22.5123 | 9.0 | 1161 | 0.5488 | 26.2882 |
| 21.6224 | 10.0 | 1290 | 0.5360 | 28.5500 |
| 21.1611 | 11.0 | 1419 | 0.5107 | 26.7347 |
| 20.0678 | 12.0 | 1548 | 0.5705 | 25.9258 |
| 19.8926 | 13.0 | 1677 | 0.5776 | 26.3917 |
| 18.5645 | 14.0 | 1806 | 0.5857 | 25.5181 |
| 18.504 | 15.0 | 1935 | 0.5701 | 26.0597 |
| 17.6968 | 16.0 | 2064 | 0.6108 | 25.1246 |
| 17.555 | 17.0 | 2193 | 0.6117 | 25.6074 |
| 17.2567 | 18.0 | 2322 | 0.5683 | 27.1555 |
| 16.0851 | 19.0 | 2451 | 0.6045 | 27.6046 |
| 16.308 | 20.0 | 2580 | 0.5550 | 28.1746 |
| 15.7719 | 21.0 | 2709 | 0.5898 | 25.3221 |
| 15.0966 | 22.0 | 2838 | 0.6299 | 27.0200 |
| 15.2529 | 23.0 | 2967 | 0.5870 | 29.0526 |
| 15.2963 | 24.0 | 3096 | 0.5638 | 27.0797 |
| 14.5228 | 25.0 | 3225 | 0.6203 | 27.8585 |
| 14.2121 | 26.0 | 3354 | 0.5659 | 28.6089 |
| 13.909 | 27.0 | 3483 | 0.6042 | 28.4436 |
| 14.0334 | 28.0 | 3612 | 0.6067 | 29.2367 |
| 13.3485 | 29.0 | 3741 | 0.5800 | 28.8674 |
| 13.4275 | 30.0 | 3870 | 0.6036 | 28.3902 |
| 13.1812 | 31.0 | 3999 | 0.5837 | 30.3429 |
| 13.0124 | 32.0 | 4128 | 0.5837 | 28.6284 |
| 12.4116 | 33.0 | 4257 | 0.5851 | 30.4995 |
| 13.3998 | 34.0 | 4386 | 0.5749 | 31.7624 |
| 12.794 | 35.0 | 4515 | 0.5840 | 29.0796 |
| 12.2829 | 36.0 | 4644 | 0.5594 | 30.8203 |
| 12.204 | 37.0 | 4773 | 0.6036 | 28.8408 |
| 12.6922 | 38.0 | 4902 | 0.5848 | 30.4332 |
| 12.1068 | 39.0 | 5031 | 0.5988 | 29.9606 |
| 11.7072 | 40.0 | 5160 | 0.5681 | 31.9938 |
| 11.7888 | 41.0 | 5289 | 0.5834 | 30.7203 |
| 11.5609 | 42.0 | 5418 | 0.5843 | 30.2104 |
| 11.4152 | 43.0 | 5547 | 0.6122 | 31.5076 |
| 11.932 | 44.0 | 5676 | 0.6020 | 31.8253 |
| 11.3475 | 45.0 | 5805 | 0.5828 | 32.4827 |
| 10.6893 | 46.0 | 5934 | 0.5792 | 33.6502 |
| 11.7356 | 47.0 | 6063 | 0.5777 | 33.7372 |
| 10.8846 | 48.0 | 6192 | 0.5716 | 34.2067 |
| 11.5715 | 49.0 | 6321 | 0.5709 | 34.1338 |
| 11.0337 | 50.0 | 6450 | 0.5775 | 34.0800 |
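
Mean IoU above is the standard semantic-segmentation metric (per-class intersection-over-union, averaged over classes). A minimal sketch of computing it with the Hugging Face `evaluate` library, using placeholder label maps and an illustrative class count:

```python
# Sketch: computing mean IoU over predicted vs. reference label maps.
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# Placeholder 2-D label maps; in practice predictions come from
# post_process_semantic_segmentation and references from ground-truth masks.
predictions = [np.random.randint(0, 3, size=(64, 64))]
references = [np.random.randint(0, 3, size=(64, 64))]

results = mean_iou.compute(
    predictions=predictions,
    references=references,
    num_labels=3,       # assumption: set to the dataset's actual class count
    ignore_index=255,   # commonly used for unlabeled pixels
)
print(results["mean_iou"])
```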
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.4.1
- Datasets 3.3.2
- Tokenizers 0.21.0