The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
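As a rough sketch, the linked NeMo script is typically invoked along these lines; the manifest path, output directory, and vocabulary size below are illustrative assumptions, not the values used for this model:

```shell
# Build a SentencePiece BPE tokenizer from the training manifest.
# Paths and --vocab_size are placeholders; adjust to your setup.
python process_asr_text_tokenizer.py \
  --manifest="train_manifest.json" \
  --data_root="tokenizers/" \
  --tokenizer="spe" \
  --spe_type="bpe" \
  --vocab_size=1024
```

The resulting tokenizer directory can then be passed to NeMo's ASR fine-tuning configuration.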
## Dataset

This model was fine-tuned on the [bam-asr-early](https://huggingface.co/datasets/RobotsMali/bam-asr-early) dataset, which consists of 37 hours of transcribed Bambara speech data. The dataset is primarily derived from the **Jeli-ASR dataset** (~87%).

## Performance