wav2vec2-large-xlsr-coraa-words-exp-2

This model is a fine-tuned version of Edresson/wav2vec2-large-xlsr-coraa-portuguese on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5347
  • WER: 0.3764
  • CER: 0.1666
  • PER: 0.3689
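
The card does not include a usage example, so the following is only a minimal inference sketch: the repository id is taken from the title above (the full hub path may differ), and the 16 kHz mono file `audio.wav` is a hypothetical input.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumption: the checkpoint is published under this id with a standard CTC head.
model_id = "wav2vec2-large-xlsr-coraa-words-exp-2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# "audio.wav" is a hypothetical recording; XLSR models expect 16 kHz input.
waveform, sample_rate = torchaudio.load("audio.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```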

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 200
  • mixed_precision_training: Native AMP
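
These values map one-to-one onto the Hugging Face `TrainingArguments` API. Below is a minimal sketch of how they could have been set; the output directory is hypothetical, and the Adam betas/epsilon are the library defaults, spelled out only to mirror the list above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-coraa-words-exp-2",  # hypothetical
    learning_rate=4e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = 32 effective train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    num_train_epochs=200,
    fp16=True,  # "Native AMP" mixed-precision training
)
```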

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    | PER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 37.5783       | 0.98  | 21   | 12.5136         | 1.0    | 0.9606 | 1.0    |
| 37.5783       | 2.0   | 43   | 4.6324          | 1.0    | 0.9606 | 1.0    |
| 37.5783       | 2.98  | 64   | 3.9199          | 1.0    | 0.9606 | 1.0    |
| 37.5783       | 4.0   | 86   | 3.6333          | 1.0    | 0.9606 | 1.0    |
| 10.8343       | 4.98  | 107  | 3.3986          | 1.0    | 0.9606 | 1.0    |
| 10.8343       | 6.0   | 129  | 3.2632          | 1.0    | 0.9606 | 1.0    |
| 10.8343       | 6.98  | 150  | 3.1944          | 1.0    | 0.9606 | 1.0    |
| 10.8343       | 8.0   | 172  | 3.1359          | 1.0    | 0.9606 | 1.0    |
| 10.8343       | 8.98  | 193  | 3.1062          | 1.0    | 0.9606 | 1.0    |
| 3.0686        | 10.0  | 215  | 3.0914          | 1.0    | 0.9606 | 1.0    |
| 3.0686        | 10.98 | 236  | 3.0769          | 1.0    | 0.9606 | 1.0    |
| 3.0686        | 12.0  | 258  | 3.0595          | 1.0    | 0.9606 | 1.0    |
| 3.0686        | 12.98 | 279  | 3.0485          | 1.0    | 0.9606 | 1.0    |
| 2.9771        | 14.0  | 301  | 3.0664          | 1.0    | 0.9606 | 1.0    |
| 2.9771        | 14.98 | 322  | 3.0457          | 1.0    | 0.9606 | 1.0    |
| 2.9771        | 16.0  | 344  | 3.0246          | 1.0    | 0.9606 | 1.0    |
| 2.9771        | 16.98 | 365  | 3.0020          | 1.0    | 0.9606 | 1.0    |
| 2.9771        | 18.0  | 387  | 2.9227          | 1.0    | 0.9606 | 1.0    |
| 2.9246        | 18.98 | 408  | 2.8106          | 1.0    | 0.9606 | 1.0    |
| 2.9246        | 20.0  | 430  | 2.6933          | 1.0    | 0.9172 | 1.0    |
| 2.9246        | 20.98 | 451  | 2.4785          | 1.0    | 0.8042 | 1.0    |
| 2.9246        | 22.0  | 473  | 2.1101          | 1.0    | 0.6433 | 1.0    |
| 2.9246        | 22.98 | 494  | 1.6137          | 1.0    | 0.5067 | 1.0    |
| 2.3792        | 24.0  | 516  | 1.1653          | 0.9991 | 0.3579 | 0.9991 |
| 2.3792        | 24.98 | 537  | 0.9528          | 0.9991 | 0.3145 | 0.9991 |
| 2.3792        | 26.0  | 559  | 0.8157          | 0.9991 | 0.3004 | 0.9991 |
| 2.3792        | 26.98 | 580  | 0.7607          | 0.9962 | 0.2891 | 0.9962 |
| 0.8858        | 28.0  | 602  | 0.7309          | 0.9783 | 0.2836 | 0.9783 |
| 0.8858        | 28.98 | 623  | 0.6736          | 0.6830 | 0.2156 | 0.6811 |
| 0.8858        | 30.0  | 645  | 0.6378          | 0.4274 | 0.1760 | 0.4198 |
| 0.8858        | 30.98 | 666  | 0.5986          | 0.4217 | 0.1756 | 0.4151 |
| 0.8858        | 32.0  | 688  | 0.6263          | 0.3934 | 0.1721 | 0.3840 |
| 0.4514        | 32.98 | 709  | 0.6369          | 0.4189 | 0.1777 | 0.4123 |
| 0.4514        | 34.0  | 731  | 0.5785          | 0.3887 | 0.1713 | 0.3802 |
| 0.4514        | 34.98 | 752  | 0.5779          | 0.3925 | 0.1713 | 0.3840 |
| 0.4514        | 36.0  | 774  | 0.5766          | 0.3792 | 0.1682 | 0.3717 |
| 0.4514        | 36.98 | 795  | 0.5598          | 0.3755 | 0.1654 | 0.3670 |
| 0.3178        | 38.0  | 817  | 0.5604          | 0.3698 | 0.1662 | 0.3632 |
| 0.3178        | 38.98 | 838  | 0.5347          | 0.3764 | 0.1666 | 0.3689 |
| 0.3178        | 40.0  | 860  | 0.5741          | 0.3726 | 0.1683 | 0.3642 |
| 0.3178        | 40.98 | 881  | 0.5516          | 0.3736 | 0.1685 | 0.3660 |
| 0.2722        | 42.0  | 903  | 0.5712          | 0.3679 | 0.1669 | 0.3623 |
| 0.2722        | 42.98 | 924  | 0.5592          | 0.3755 | 0.1666 | 0.3670 |
| 0.2722        | 44.0  | 946  | 0.5517          | 0.3679 | 0.1648 | 0.3613 |
| 0.2722        | 44.98 | 967  | 0.5622          | 0.3566 | 0.1627 | 0.35   |
| 0.2722        | 46.0  | 989  | 0.5663          | 0.3651 | 0.1659 | 0.3575 |
| 0.2191        | 46.98 | 1010 | 0.5639          | 0.3679 | 0.1652 | 0.3623 |
| 0.2191        | 48.0  | 1032 | 0.5749          | 0.3538 | 0.1640 | 0.3453 |
| 0.2191        | 48.98 | 1053 | 0.5537          | 0.3642 | 0.1633 | 0.3566 |
| 0.2191        | 50.0  | 1075 | 0.5536          | 0.3736 | 0.1662 | 0.3679 |
| 0.2191        | 50.98 | 1096 | 0.5671          | 0.3774 | 0.1669 | 0.3717 |
| 0.1935        | 52.0  | 1118 | 0.5730          | 0.3670 | 0.1661 | 0.3594 |
| 0.1935        | 52.98 | 1139 | 0.5711          | 0.3717 | 0.1659 | 0.3632 |
| 0.1935        | 54.0  | 1161 | 0.5680          | 0.3623 | 0.1634 | 0.3528 |
| 0.1935        | 54.98 | 1182 | 0.5692          | 0.3575 | 0.1630 | 0.35   |
| 0.1755        | 56.0  | 1204 | 0.6076          | 0.3651 | 0.1675 | 0.3585 |
| 0.1755        | 56.98 | 1225 | 0.5917          | 0.3651 | 0.1655 | 0.3575 |
| 0.1755        | 58.0  | 1247 | 0.5720          | 0.3566 | 0.1636 | 0.3519 |
| 0.1755        | 58.98 | 1268 | 0.5759          | 0.3651 | 0.1641 | 0.3594 |
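
For reference, WER and CER values like those in the table can be computed with the Hugging Face `evaluate` library; the reference/prediction strings below are made-up examples, and PER would be computed the same way over phoneme-level transcriptions.

```python
import evaluate

# Load the word- and character-error-rate metrics from the evaluate hub.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Made-up example pair: one missing accent yields a small CER but a full
# one-word substitution for WER.
references = ["o gato dorme no sofá"]
predictions = ["o gato dorme no sofa"]

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```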

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.4.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.13.3