# beit-ena24

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the ena24 dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows the results):

- Loss: 0.9751
- Accuracy: 0.6809
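
Since the usage sections below are still empty, here is a minimal inference sketch. It assumes the published model id `mbiarreta/beit-ena24` (shown on this page); the image path and the default preprocessing are illustrative assumptions, not taken from the author's code.

```python
# Minimal image-classification sketch; "example_photo.jpg" is a hypothetical input image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "mbiarreta/beit-ena24"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example_photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(-1))
print(model.config.id2label[predicted_id])
```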

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of the corresponding `TrainingArguments` follows the list):

- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
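
The entries above map directly onto `transformers.TrainingArguments`. Below is a hedged sketch of those arguments only; the `output_dir` name is an assumption, `optim="adamw_torch"` uses the stated betas/epsilon as its defaults, and dataset preparation plus the `Trainer` call are omitted because this card does not describe them.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; "beit-ena24" as output_dir is an assumption.
args = TrainingArguments(
    output_dir="beit-ena24",
    learning_rate=2e-4,      # 0.0002
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",     # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are its defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,               # Native AMP mixed precision (requires a CUDA device)
)
```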

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.8853        | 0.1302 | 100  | 2.8477          | 0.2176   |
| 2.7962        | 0.2604 | 200  | 2.5644          | 0.1985   |
| 2.4273        | 0.3906 | 300  | 2.3036          | 0.2885   |
| 2.1587        | 0.5208 | 400  | 2.1759          | 0.3305   |
| 2.1721        | 0.6510 | 500  | 2.0160          | 0.3405   |
| 2.0539        | 0.7812 | 600  | 1.8444          | 0.4084   |
| 1.7687        | 0.9115 | 700  | 1.7824          | 0.4069   |
| 1.7545        | 1.0417 | 800  | 1.6203          | 0.5092   |
| 1.5865        | 1.1719 | 900  | 1.5315          | 0.5176   |
| 1.3489        | 1.3021 | 1000 | 1.6056          | 0.5084   |
| 1.2064        | 1.4323 | 1100 | 1.2743          | 0.5878   |
| 1.1963        | 1.5625 | 1200 | 1.1703          | 0.6336   |
| 1.0333        | 1.6927 | 1300 | 1.1410          | 0.6412   |
| 1.1828        | 1.8229 | 1400 | 1.0684          | 0.6473   |
| 0.6996        | 1.9531 | 1500 | 0.9751          | 0.6809   |

### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1