wav2vec2-large-xlsr-coraa-words-exp-3

This model is a fine-tuned version of Edresson/wav2vec2-large-xlsr-coraa-portuguese on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

  • Loss: 0.5878
  • Wer: 0.3943
  • Cer: 0.1664
  • Per: 0.3858
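
The card ships no inference code, so the following is an illustration only: a minimal transcription sketch assuming the checkpoint uses the standard Wav2Vec2 CTC classes of its base model. The local path `./wav2vec2-large-xlsr-coraa-words-exp-3`, the file name `example.wav`, and the 16 kHz mono input are hypothetical.

```python
# Minimal inference sketch (assumptions: standard Wav2Vec2 CTC head,
# 16 kHz mono input; the checkpoint path below is hypothetical).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "./wav2vec2-large-xlsr-coraa-words-exp-3"  # hypothetical local path
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load an audio file and resample it to the expected 16 kHz.
speech, _ = librosa.load("example.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```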

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
  • mixed_precision_training: Native AMP
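
For illustration, here is how these values might map onto the Transformers TrainingArguments API, assuming the standard Trainer was used (the card does not state this); the output_dir is hypothetical.

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# output_dir and the use of Trainer itself are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-coraa-words-exp-3",  # assumed output path
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=200,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so nothing needs to be set.
)
```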

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    | Per    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 37.3293       | 0.98  | 21   | 11.1197         | 1.0    | 0.9606 | 1.0    |
| 37.3293       | 2.0   | 43   | 4.4296          | 1.0    | 0.9606 | 1.0    |
| 37.3293       | 2.98  | 64   | 3.8433          | 1.0    | 0.9606 | 1.0    |
| 37.3293       | 4.0   | 86   | 3.6004          | 1.0    | 0.9606 | 1.0    |
| 10.4132       | 4.98  | 107  | 3.4505          | 1.0    | 0.9606 | 1.0    |
| 10.4132       | 6.0   | 129  | 3.2447          | 1.0    | 0.9606 | 1.0    |
| 10.4132       | 6.98  | 150  | 3.1682          | 1.0    | 0.9606 | 1.0    |
| 10.4132       | 8.0   | 172  | 3.1254          | 1.0    | 0.9606 | 1.0    |
| 10.4132       | 8.98  | 193  | 3.1141          | 1.0    | 0.9606 | 1.0    |
| 3.0609        | 10.0  | 215  | 3.0941          | 1.0    | 0.9606 | 1.0    |
| 3.0609        | 10.98 | 236  | 3.0881          | 1.0    | 0.9606 | 1.0    |
| 3.0609        | 12.0  | 258  | 3.0595          | 1.0    | 0.9606 | 1.0    |
| 3.0609        | 12.98 | 279  | 3.0402          | 1.0    | 0.9606 | 1.0    |
| 2.972         | 14.0  | 301  | 3.0408          | 1.0    | 0.9606 | 1.0    |
| 2.972         | 14.98 | 322  | 3.0179          | 1.0    | 0.9606 | 1.0    |
| 2.972         | 16.0  | 344  | 2.9057          | 1.0    | 0.9606 | 1.0    |
| 2.972         | 16.98 | 365  | 2.7601          | 1.0    | 0.9606 | 1.0    |
| 2.972         | 18.0  | 387  | 2.6964          | 1.0    | 0.9384 | 1.0    |
| 2.7812        | 18.98 | 408  | 2.3877          | 1.0    | 0.7817 | 1.0    |
| 2.7812        | 20.0  | 430  | 1.8978          | 1.0    | 0.6250 | 1.0    |
| 2.7812        | 20.98 | 451  | 1.4096          | 1.0    | 0.4215 | 1.0    |
| 2.7812        | 22.0  | 473  | 1.0923          | 0.9991 | 0.3336 | 0.9991 |
| 2.7812        | 22.98 | 494  | 0.9002          | 0.9991 | 0.3079 | 0.9991 |
| 1.4929        | 24.0  | 516  | 0.8111          | 0.9991 | 0.2955 | 0.9991 |
| 1.4929        | 24.98 | 537  | 0.7687          | 0.9953 | 0.2888 | 0.9953 |
| 1.4929        | 26.0  | 559  | 0.7479          | 0.9811 | 0.2749 | 0.9811 |
| 1.4929        | 26.98 | 580  | 0.7179          | 0.9698 | 0.2690 | 0.9689 |
| 0.5434        | 28.0  | 602  | 0.6987          | 0.8028 | 0.2307 | 0.8009 |
| 0.5434        | 28.98 | 623  | 0.6984          | 0.4547 | 0.1811 | 0.4453 |
| 0.5434        | 30.0  | 645  | 0.6479          | 0.4283 | 0.1760 | 0.4208 |
| 0.5434        | 30.98 | 666  | 0.6504          | 0.4387 | 0.1782 | 0.4302 |
| 0.5434        | 32.0  | 688  | 0.6665          | 0.4217 | 0.1741 | 0.4094 |
| 0.3255        | 32.98 | 709  | 0.6688          | 0.4292 | 0.1763 | 0.4208 |
| 0.3255        | 34.0  | 731  | 0.6182          | 0.4179 | 0.1725 | 0.4066 |
| 0.3255        | 34.98 | 752  | 0.6193          | 0.4066 | 0.1718 | 0.4    |
| 0.3255        | 36.0  | 774  | 0.6320          | 0.4085 | 0.1716 | 0.3981 |
| 0.3255        | 36.98 | 795  | 0.6316          | 0.4075 | 0.1718 | 0.4009 |
| 0.231         | 38.0  | 817  | 0.6636          | 0.3991 | 0.1713 | 0.3896 |
| 0.231         | 38.98 | 838  | 0.6038          | 0.3953 | 0.1679 | 0.3887 |
| 0.231         | 40.0  | 860  | 0.6158          | 0.4    | 0.1680 | 0.3915 |
| 0.231         | 40.98 | 881  | 0.6122          | 0.3896 | 0.1672 | 0.3821 |
| 0.2015        | 42.0  | 903  | 0.6088          | 0.3877 | 0.1682 | 0.3830 |
| 0.2015        | 42.98 | 924  | 0.5878          | 0.3943 | 0.1664 | 0.3858 |
| 0.2015        | 44.0  | 946  | 0.6330          | 0.3811 | 0.1662 | 0.3764 |
| 0.2015        | 44.98 | 967  | 0.6317          | 0.3887 | 0.1664 | 0.3821 |
| 0.2015        | 46.0  | 989  | 0.6155          | 0.3906 | 0.1671 | 0.3858 |
| 0.1617        | 46.98 | 1010 | 0.6348          | 0.3811 | 0.1648 | 0.3774 |
| 0.1617        | 48.0  | 1032 | 0.6249          | 0.3840 | 0.1655 | 0.3774 |
| 0.1617        | 48.98 | 1053 | 0.6249          | 0.3915 | 0.1669 | 0.3868 |
| 0.1617        | 50.0  | 1075 | 0.6070          | 0.3877 | 0.1659 | 0.3811 |
| 0.1617        | 50.98 | 1096 | 0.6005          | 0.3849 | 0.1637 | 0.3764 |
| 0.1426        | 52.0  | 1118 | 0.6131          | 0.3868 | 0.1638 | 0.3802 |
| 0.1426        | 52.98 | 1139 | 0.6216          | 0.3925 | 0.1668 | 0.3868 |
| 0.1426        | 54.0  | 1161 | 0.6343          | 0.3953 | 0.1672 | 0.3849 |
| 0.1426        | 54.98 | 1182 | 0.6267          | 0.3896 | 0.1643 | 0.3821 |
| 0.1273        | 56.0  | 1204 | 0.6093          | 0.3849 | 0.1647 | 0.3792 |
| 0.1273        | 56.98 | 1225 | 0.6199          | 0.3906 | 0.1654 | 0.3821 |
| 0.1273        | 58.0  | 1247 | 0.5931          | 0.3915 | 0.1659 | 0.3830 |
| 0.1273        | 58.98 | 1268 | 0.6227          | 0.3877 | 0.1662 | 0.3802 |
| 0.1273        | 60.0  | 1290 | 0.6208          | 0.3783 | 0.1652 | 0.3698 |
| 0.1106        | 60.98 | 1311 | 0.6312          | 0.3802 | 0.1647 | 0.3745 |
| 0.1106        | 62.0  | 1333 | 0.6203          | 0.3830 | 0.1671 | 0.3764 |
| 0.1106        | 62.98 | 1354 | 0.6219          | 0.3915 | 0.1673 | 0.3830 |
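
The card does not document the evaluation tooling. As an illustration only, word and character error rates of the kind reported above can be computed with the jiwer library; the example strings below are made up, and PER additionally requires phoneme-level transcriptions, which are not shown here.

```python
# Sketch of WER/CER computation with jiwer; the actual tooling used for
# this card is undocumented, so treat this as illustrative only.
import jiwer

reference = "o gato subiu no telhado"   # made-up reference transcript
hypothesis = "o gato subiu telhado"    # made-up model output

print("WER:", jiwer.wer(reference, hypothesis))  # word error rate
print("CER:", jiwer.cer(reference, hypothesis))  # character error rate
```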

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.4.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.13.3