# wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset (Hindi). It achieves the following results on the evaluation set:
- Loss: 0.7392
- Wer: 1.0141
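
The checkpoint can be loaded for Hindi speech recognition with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes. The sketch below is a minimal example; the repo id and audio path are placeholders, not confirmed by this card.

```python
# Minimal inference sketch. The repo id and audio file below are placeholders;
# substitute the actual published checkpoint path and a real recording.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<user>/wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio clip and resample to the 16 kHz rate expected by XLS-R.
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```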
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.42184e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
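
In `Trainer` terms, the hyperparameters above roughly correspond to the following `TrainingArguments` sketch. The output directory is a placeholder, and the dataset preprocessing, data collator, and CTC head setup are not described on this card.

```python
# Hedged sketch of TrainingArguments matching the listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-finetuned-hindi-common-voice-9-0",  # placeholder
    learning_rate=4.42184e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # native AMP mixed-precision training
)
```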
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2217        | 3.03  | 400  | 4.0314          | 1.0    |
| 3.2902        | 6.06  | 800  | 2.1356          | 1.0001 |
| 0.9858        | 9.09  | 1200 | 0.8566          | 1.0037 |
| 0.5131        | 12.12 | 1600 | 0.7481          | 1.0074 |
| 0.3781        | 15.15 | 2000 | 0.7437          | 1.008  |
| 0.2998        | 18.18 | 2400 | 0.7310          | 1.0162 |
| 0.2553        | 21.21 | 2800 | 0.7384          | 1.0159 |
| 0.2216        | 24.24 | 3200 | 0.7537          | 1.0100 |
| 0.2048        | 27.27 | 3600 | 0.7392          | 1.0141 |
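
Note that the reported Wer stays slightly above 1.0 throughout training; WER can exceed 1.0 when the total number of substitutions, deletions, and insertions exceeds the number of reference words (for example, many spurious insertions). A hedged sketch of how WER is computed with the `jiwer` package follows; the card does not state which WER implementation or text normalization was used.

```python
# Hedged example of computing WER with jiwer; the actual evaluation pipeline
# used for this card is not documented here.
from jiwer import wer

references = ["नमस्ते दुनिया"]        # ground-truth transcript (2 words)
hypotheses = ["नमस्ते दुनिया आज"]     # model output with one extra insertion
print(wer(references, hypotheses))   # 1 insertion / 2 reference words = 0.5
```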
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3