---
license: cc0-1.0
---

# ManaTTS Persian: a recipe for creating TTS datasets for lower resource languages

![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Dataset-orange)

Mana-TTS is a comprehensive, large-scale Persian Text-to-Speech (TTS) dataset featuring over 100 hours of high-quality audio, designed for speech synthesis and related tasks. The dataset has been carefully collected, processed, and annotated to ensure high-quality data for training TTS models. For details on the data processing pipeline and statistics, please refer to the paper in the Citation section.

## Acknowledgement

The raw audio and text files have been collected from the archive of the [Nasl-e-Mana](https://naslemana.com/) magazine, which is devoted to the blind. We thank the Nasl-e-Mana magazine for their invaluable work and for being so generous with the published dataset license. We also extend our gratitude to the [Iran Blind Non-governmental Organization](https://ibngo.ir/) for their support and guidance regarding the need for open-access initiatives in this domain.

## Data Columns

Each Parquet file contains the following columns:

- **file name** (`string`): The unique identifier of the audio file.
- **transcript** (`string`): The ground-truth transcript corresponding to the audio.
- **duration** (`float64`): Duration of the audio file in seconds.
- **match quality** (`string`): Either "HIGH" for `CER < 0.05` or "MIDDLE" for `0.05 < CER < 0.2` between the ground-truth and hypothesis transcripts.
- **hypothesis** (`string`): The best ASR-generated transcript, used as the hypothesis for finding the matching ground-truth transcript.
- **CER** (`float64`): The Character Error Rate (CER) between the ground-truth and hypothesis transcripts.
- **search type** (`int64`): 1 if the ground-truth transcript was found by Interval Search, or 2 if by Gapped Search (refer to the paper for more details).
- **ASRs** (`string`): The Automatic Speech Recognition (ASR) systems used to find a satisfactory matching transcript.
- **audio** (`sequence`): The audio waveform data.
- **samplerate** (`float64`): The sample rate of the audio.

## Usage

To use the dataset, you can load it directly with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("MahtaFetrat/Mana-TTS", split='train')
```

You can also download specific parts or the entire dataset:

```bash
# Download a specific part
wget https://huggingface.co/datasets/MahtaFetrat/Mana-TTS/resolve/main/dataset/dataset_part_01.parquet

# Download the entire dataset
git clone https://huggingface.co/datasets/MahtaFetrat/Mana-TTS
```

A short sketch of exporting a single record to a WAV file is included at the end of this README.

## Citation

If you use Mana-TTS in your research or projects, please cite the following paper:

```bibtex
@article{fetrat2024manatts,
  title={ManaTTS Persian: a recipe for creating TTS datasets for lower resource languages},
  author={Mahta Fetrat Qharabagh and Zahra Dehghanian and Hamid R. Rabiee},
  journal={arXiv preprint arXiv:2409.07259},
  year={2024},
}
```

## License

This dataset is available under the CC0-1.0 license. However, the dataset should not be used to replicate or imitate the speaker's voice for malicious purposes or unethical activities, including voice cloning with malicious intent.

## Collaboration and Community Impact

We encourage researchers, developers, and the broader community to utilize the resources provided in this project, particularly in the development of high-quality screen readers and other assistive technologies to support the Iranian blind community.
By fostering open-source collaboration, we aim to drive innovation and improve accessibility for all.
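
As a starting point for such applications, the following minimal sketch loads the dataset in streaming mode and writes one record's audio to a WAV file. It assumes the `audio` column holds a 1-D sequence of waveform samples at the rate given by `samplerate`, and that the `numpy` and `soundfile` packages are installed; adapt the field handling to your setup.

```python
import numpy as np
import soundfile as sf
from datasets import load_dataset

# Stream the dataset so Parquet parts are fetched on demand
dataset = load_dataset("MahtaFetrat/Mana-TTS", split="train", streaming=True)

# Take the first record and inspect its metadata and transcript
record = next(iter(dataset))
print(record["file name"], record["duration"], record["match quality"])
print(record["transcript"])

# Assumption: the audio column is a 1-D sequence of float samples
# at the rate given by the samplerate column
waveform = np.asarray(record["audio"], dtype=np.float32)
sf.write("sample.wav", waveform, int(record["samplerate"]))
```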