Model Card for whisper-large-v2-formosan-all
This model is a fine-tuned version of openai/whisper-large-v2 for Taiwanese indigenous (Formosan) languages.
Note: we use Indonesian as the Whisper language ID, since the Formosan languages are not among Whisper's predefined languages.
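Below is a minimal inference sketch (not taken from the authors' code) showing how the model might be used with the transformers ASR pipeline; the audio file name is a placeholder, and passing Indonesian as the language follows the note above.

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="formospeech/whisper-large-v2-formosan-all",
    torch_dtype=torch.float16,
    device="cuda:0",
)

# Formosan languages are not in Whisper's language set, so Indonesian
# is used as the language ID, as stated in the card.
result = asr(
    "example.wav",  # placeholder path to a 16 kHz audio file
    generate_kwargs={"language": "indonesian", "task": "transcribe"},
)
print(result["text"])
```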
Training process
The model was trained with the following hyperparameters (a configuration sketch follows the list):
- Batch size: 2 × 4 (per-device batch size 2 on 4 L40S GPUs)
- Gradient accumulation steps: 64
- Total steps: 4146
- Learning rate: 5e-4
- Data augmentation: No
- Optimizer: schedule_free_radam
- LR scheduler type: constant
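As a rough reconstruction of the settings listed above, the training arguments could look like the sketch below. The argument names follow transformers' Seq2SeqTrainingArguments API and are assumptions, not the authors' actual training script; the output directory and mixed-precision flag are likewise illustrative.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v2-formosan-all",  # illustrative output path
    per_device_train_batch_size=2,               # 2 per GPU on 4 L40S GPUs
    gradient_accumulation_steps=64,
    max_steps=4146,
    learning_rate=5e-4,
    optim="schedule_free_radam",                 # schedule-free RAdam, no LR schedule needed
    lr_scheduler_type="constant",
    fp16=True,                                   # assumption: mixed precision on L40S
)
```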