Whisper-Tiny-Java-v7

This model is a fine-tuned version of openai/whisper-tiny. The dataset is not named in the card, though the repository name (bagasshw/whisper-tiny-javanese-openslr-v7) suggests Javanese OpenSLR speech data. It achieves the following results on the evaluation set:

  • Loss: 1.6090
  • Wer: 0.5104
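
For inference, the checkpoint can be loaded through the transformers automatic-speech-recognition pipeline. A minimal sketch, assuming transformers and torch are installed; the audio path is a placeholder:

```python
from transformers import pipeline

# Load this checkpoint into the speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="bagasshw/whisper-tiny-javanese-openslr-v7",
)

# "sample.wav" is a placeholder path; Whisper expects 16 kHz mono audio
# (the pipeline resamples other rates when reading from a file).
print(asr("sample.wav")["text"])
```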

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 50000
  • mixed_precision_training: Native AMP
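
A sketch of how the values above map onto transformers.Seq2SeqTrainingArguments; the output directory is a placeholder, and the AdamW betas and epsilon noted in the comments are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-javanese-openslr-v7",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,    # train_batch_size
    per_device_eval_batch_size=4,     # eval_batch_size
    gradient_accumulation_steps=2,    # total train batch size = 8 * 2 = 16
    seed=42,
    optim="adamw_torch",              # AdamW with default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=50000,                  # training_steps
    fp16=True,                        # native AMP mixed-precision training
)
```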

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.9973        | 8.0   | 1000  | 1.0428          | 0.6821 |
| 0.3139        | 16.0  | 2000  | 0.8733          | 0.6234 |
| 0.0988        | 24.0  | 3000  | 0.8925          | 0.6347 |
| 0.0499        | 32.0  | 4000  | 0.9389          | 0.6149 |
| 0.0349        | 40.0  | 5000  | 0.9879          | 0.6206 |
| 0.0246        | 48.0  | 6000  | 1.0245          | 0.4681 |
| 0.0164        | 56.0  | 7000  | 1.1032          | 0.4715 |
| 0.0118        | 64.0  | 8000  | 1.1389          | 0.5985 |
| 0.0094        | 72.0  | 9000  | 1.1457          | 0.5793 |
| 0.0086        | 80.0  | 10000 | 1.1854          | 0.5375 |
| 0.0067        | 88.0  | 11000 | 1.2372          | 0.4664 |
| 0.005         | 96.0  | 12000 | 1.2564          | 0.4992 |
| 0.0043        | 104.0 | 13000 | 1.2725          | 0.4478 |
| 0.0039        | 112.0 | 14000 | 1.3331          | 0.5308 |
| 0.0036        | 120.0 | 15000 | 1.3492          | 0.6036 |
| 0.0028        | 128.0 | 16000 | 1.3910          | 0.4551 |
| 0.0034        | 136.0 | 17000 | 1.4366          | 0.4907 |
| 0.0024        | 144.0 | 18000 | 1.3797          | 0.6126 |
| 0.0019        | 152.0 | 19000 | 1.4368          | 0.5229 |
| 0.0017        | 160.0 | 20000 | 1.4499          | 0.5980 |
| 0.0014        | 168.0 | 21000 | 1.4369          | 0.5940 |
| 0.0015        | 176.0 | 22000 | 1.4615          | 0.5308 |
| 0.001         | 184.0 | 23000 | 1.4453          | 0.4986 |
| 0.0009        | 192.0 | 24000 | 1.4906          | 0.5471 |
| 0.0007        | 200.0 | 25000 | 1.4574          | 0.4941 |
| 0.0011        | 208.0 | 26000 | 1.4995          | 0.4647 |
| 0.0007        | 216.0 | 27000 | 1.5195          | 0.5155 |
| 0.0011        | 224.0 | 28000 | 1.4928          | 0.5483 |
| 0.0011        | 232.0 | 29000 | 1.5243          | 0.5144 |
| 0.0007        | 240.0 | 30000 | 1.5805          | 0.4884 |
| 0.0005        | 248.0 | 31000 | 1.5294          | 0.5116 |
| 0.0005        | 256.0 | 32000 | 1.5940          | 0.4975 |
| 0.0003        | 264.0 | 33000 | 1.5760          | 0.5003 |
| 0.0004        | 272.0 | 34000 | 1.5940          | 0.4873 |
| 0.0003        | 280.0 | 35000 | 1.6010          | 0.4681 |
| 0.0004        | 288.0 | 36000 | 1.5837          | 0.4845 |
| 0.0006        | 296.0 | 37000 | 1.5839          | 0.4794 |
| 0.0002        | 304.0 | 38000 | 1.5652          | 0.4754 |
| 0.0003        | 312.0 | 39000 | 1.6083          | 0.4833 |
| 0.0002        | 320.0 | 40000 | 1.5750          | 0.5189 |
| 0.0004        | 328.0 | 41000 | 1.6199          | 0.5980 |
| 0.0001        | 336.0 | 42000 | 1.5783          | 0.5353 |
| 0.0001        | 344.0 | 43000 | 1.5898          | 0.5099 |
| 0.0005        | 352.0 | 44000 | 1.6005          | 0.5833 |
| 0.0002        | 360.0 | 45000 | 1.5903          | 0.4873 |
| 0.0002        | 368.0 | 46000 | 1.6196          | 0.5150 |
| 0.0001        | 376.0 | 47000 | 1.6212          | 0.5251 |
| 0.0002        | 384.0 | 48000 | 1.6180          | 0.5539 |
| 0.0001        | 392.0 | 49000 | 1.6104          | 0.4963 |
| 0.0001        | 400.0 | 50000 | 1.6090          | 0.5104 |
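
Wer in the table is the word error rate on the evaluation set, expressed as a fraction (the final 0.5104 means roughly 51% of reference words were substituted, deleted, or inserted). A minimal sketch of how such a score is computed, assuming the evaluate package is installed; the transcripts below are made-up examples, not data from this run:

```python
import evaluate

wer_metric = evaluate.load("wer")  # delegates to the jiwer package

references = ["sugeng enjing badhe tindak pundi"]  # hypothetical reference transcript
predictions = ["sugeng enjing badhe pundi"]        # hypothetical model output

# WER = (substitutions + deletions + insertions) / reference word count
print(wer_metric.compute(references=references, predictions=predictions))
```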

Framework versions

  • Transformers 4.50.0.dev0
  • PyTorch 2.7.0+cu128
  • Datasets 2.16.0
  • Tokenizers 0.21.1

Model size: 37.8M parameters (Safetensors, F32)
Model repository: bagasshw/whisper-tiny-javanese-openslr-v7, fine-tuned from openai/whisper-tiny.