windowz_test

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set (an illustrative sketch of these metrics follows the list):

  • Model Preparation Time: 0.001
  • Accuracy: 0.9678
  • F1: 0.9630
  • Iou: 0.9377
  • Loss: 0.1675
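
The Accuracy/F1/IoU combination suggests a per-pixel, segmentation-style evaluation. The snippet below is only a minimal sketch of how such metrics are commonly computed for a binary label map; it is an assumption about the metric definitions, not the evaluation code actually used for this card.

```python
# Hypothetical sketch: per-pixel accuracy, F1, and IoU for a binary mask.
# This is NOT the card's evaluation code; it only illustrates the metrics.
import numpy as np

def pixel_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """pred and target are arrays of the same shape; nonzero = foreground."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()

    accuracy = (tp + tn) / pred.size
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return {"accuracy": accuracy, "f1": f1, "iou": iou}

# Example on random data, just to show the call signature.
rng = np.random.default_rng(42)
pred = rng.integers(0, 2, size=(64, 64))
target = rng.integers(0, 2, size=(64, 64))
print(pixel_metrics(pred, target))
```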

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 1
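
A minimal sketch of how the hyperparameters above map onto a Hugging Face TrainingArguments object is shown below. The output_dir value is an assumed placeholder, and the optimizer is the Trainer default AdamW with the listed betas and epsilon; the actual training script is not published with this card.

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments.
# output_dir is an assumed placeholder; this is not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="windowz_test",        # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
)
```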

Training results

| Training Loss | Epoch  | Step | Model Preparation Time | Iou    | Validation Loss |
|:-------------:|:------:|:----:|:----------------------:|:------:|:---------------:|
| 1.0939        | 0.0501 | 257  | 0.001                  | 0.5935 | 1.0369          |
| 1.0201        | 0.1003 | 514  | 0.001                  | 0.6796 | 0.9606          |
| 0.9555        | 0.1504 | 771  | 0.001                  | 0.7692 | 0.8134          |
| 0.8988        | 0.2005 | 1028 | 0.001                  | 0.8883 | 0.4634          |
| 0.8663        | 0.2507 | 1285 | 0.001                  | 0.9029 | 0.3463          |
| 0.8516        | 0.3008 | 1542 | 0.001                  | 0.8728 | 0.3075          |
| 0.7798        | 0.3510 | 1799 | 0.001                  | 0.9528 | 0.7747          |
| 0.7601        | 0.4011 | 2056 | 0.001                  | 0.8082 | 0.5655          |
| 0.7723        | 0.4512 | 2313 | 0.001                  | 0.9550 | 0.3013          |
| 0.7258        | 0.5014 | 2570 | 0.001                  | 0.9673 | 0.1914          |
| 0.7085        | 0.5515 | 2827 | 0.001                  | 0.9377 | 0.1675          |
| 0.7058        | 0.6016 | 3084 | 0.001                  | 0.9406 | 0.2294          |
| 0.7008        | 0.6518 | 3341 | 0.001                  | 0.9189 | 0.2342          |
| 0.6691        | 0.7019 | 3598 | 0.001                  | 0.9404 | 0.2161          |

Framework versions

  • Transformers 4.45.0
  • Pytorch 2.5.1+cu124
  • Datasets 2.21.0
  • Tokenizers 0.20.3
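
To check that a local environment matches the versions listed above, a quick sketch like the following can be used; the package names are the standard PyPI distributions and the pinned versions are taken directly from this list.

```python
# Minimal sketch: compare installed library versions against the card's pins.
import datasets
import tokenizers
import torch
import transformers

pinned = {
    "Transformers": (transformers.__version__, "4.45.0"),
    "PyTorch": (torch.__version__, "2.5.1+cu124"),
    "Datasets": (datasets.__version__, "2.21.0"),
    "Tokenizers": (tokenizers.__version__, "0.20.3"),
}
for name, (installed, expected) in pinned.items():
    print(f"{name}: installed {installed}, model card pins {expected}")
```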