# whisper-large-v3-sandi-train-dev-2
This model is a fine-tuned version of ntnu-smil/whisper-large-v3-sandi-train-dev-1 on the ntnu-smil/sandi2025-ds dataset. It achieves the following results on the evaluation set (see the note on Wer and Cer after the list):
- Loss: 1.7843
- Wer: 51.5995
- Cer: 231.3822
- Decode Runtime: 297.6495
- Wer Runtime: 0.1874
- Cer Runtime: 0.5015
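
Wer and Cer are word and character error rates, reported here as percentages. Both are edit-distance ratios that count insertions as well as substitutions and deletions, so values above 100 (as with Cer here) are possible when the hypotheses contain more character edits than the references have characters. The evaluation script is not published, but here is a minimal sketch of how such numbers are typically computed with the `evaluate` library (the example strings are hypothetical):

```python
import evaluate

# Standard WER/CER implementations from the Hugging Face `evaluate` library.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the cat sat on on the mat"]  # hypothetical model output
references = ["the cat sat on the mat"]      # hypothetical ground truth

# Both return (substitutions + deletions + insertions) / reference length;
# multiplying by 100 matches the percentage-style figures above, and
# insertions can push the result past 100.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```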
## Model description
More information needed
## Intended uses & limitations
More information needed
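
Until the authors add details, here is a minimal inference sketch. It assumes this repository hosts a PEFT adapter on top of openai/whisper-large-v3 (consistent with the PEFT framework version listed below); the helper name `transcribe` and the sampling-rate handling are illustrative, not the authors' code:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"
adapter_id = "ntnu-smil/whisper-large-v3-sandi-train-dev-2"

# Load the frozen base model, then attach this repo's adapter weights on top.
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id).eval()

def transcribe(waveform):
    """Transcribe a 16 kHz mono waveform (float array)."""
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    generated = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```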
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 1024
- optimizer: adamw_torch with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 28
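
The training script is not published; below is a hedged sketch of `Seq2SeqTrainingArguments` matching the values above, assuming a single device (so 32 × 32 accumulation gives the effective batch size of 1024) and an illustrative `output_dir`:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-sandi-train-dev-2",  # assumed path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=32,  # 32 * 32 = 1024 effective train batch
    max_steps=28,                    # training_steps above
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    seed=42,
)
```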
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Wer     | Cer      | Decode Runtime | Wer Runtime | Cer Runtime |
|:-------------:|:------:|:----:|:---------------:|:-------:|:--------:|:--------------:|:-----------:|:-----------:|
| 3.5982        | 1.1435 | 7    | 1.9369          | 55.0783 | 219.8890 | 292.1873       | 0.1938      | 0.4974      |
| 1.8793        | 2.2870 | 14   | 1.8568          | 53.1577 | 226.2867 | 299.6173       | 0.1898      | 0.5061      |
| 1.7879        | 3.4305 | 21   | 1.8041          | 51.8180 | 230.0110 | 300.7474       | 0.1845      | 0.4948      |
| 1.7769        | 4.5740 | 28   | 1.7843          | 51.5995 | 231.3822 | 297.6495       | 0.1874      | 0.5015      |
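
Tying the pieces together, here is a sketch of an evaluation pass over the dataset named above, reusing `transcribe` and `wer_metric` from the earlier sketches; the split and column names ("test", "audio", "sentence") are assumptions, since the dataset schema is not documented here:

```python
from datasets import Audio, load_dataset

ds = load_dataset("ntnu-smil/sandi2025-ds", split="test")  # split name assumed
ds = ds.cast_column("audio", Audio(sampling_rate=16000))   # resample for Whisper

predictions, references = [], []
for example in ds:
    predictions.append(transcribe(example["audio"]["array"]))  # column names assumed
    references.append(example["sentence"])

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
```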
### Framework versions
- PEFT 0.15.1
- Transformers 4.50.3
- PyTorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
### Base model

openai/whisper-large-v3