---
license: cc-by-4.0
task_categories:
  - text-to-speech
language:
  - en
size_categories:
  - 10K<n<100K
dataset_info:
  features:
    - name: text_normalized
      dtype: string
    - name: text_original
      dtype: string
    - name: speaker_id
      dtype: string
    - name: path
      dtype: string
    - name: chapter_id
      dtype: string
    - name: id
      dtype: string
    - name: codes
      sequence:
        sequence: int64
  splits:
    - name: dev.clean
      num_bytes: 28485381
      num_examples: 5736
    - name: test.clean
      num_bytes: 27017042
      num_examples: 4837
    - name: train.clean.100
      num_bytes: 170451359
      num_examples: 33232
    - name: train.clean.360
      num_bytes: 605899762
      num_examples: 116426
  download_size: 157338197
  dataset_size: 831853544
configs:
  - config_name: default
    data_files:
      - split: dev.clean
        path: data/dev.clean-*
      - split: test.clean
        path: data/test.clean-*
      - split: train.clean.100
        path: data/train.clean.100-*
      - split: train.clean.360
        path: data/train.clean.360-*

---

# LibriTTS-R Mimi encoding

This dataset converts all audio in the `dev.clean`, `test.clean`, `train.clean.100`, and `train.clean.360` splits of the LibriTTS-R dataset from waveforms to tokens in Kyutai's Mimi neural codec. These tokens are intended as targets for DualAR audio models, but they also let you download all of the audio in roughly 50-100x less space, if you're comfortable decoding later on with rustymimi or Transformers.

This dataset does NOT contain the original audio; please use the regular LibriTTS-R for that. I am not actively maintaining this dataset, as it exists for a personal project; please do not expect any updates or assistance. Sorry!
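For reference, here is a minimal sketch of how the encoding could be reproduced with HF Transformers. This is not the script used to build this dataset; the source dataset (`blabble-io/libritts_r`) and the `num_quantizers=8` setting are assumptions, the latter inferred from the shape of the `codes` feature.

```python
import torch
from datasets import load_dataset
from transformers import MimiModel, AutoFeatureExtractor

device = "cuda"
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
model = MimiModel.from_pretrained("kyutai/mimi").to(device)

# Stream one waveform from the original corpus (assumed: blabble-io/libritts_r)
ds = load_dataset("blabble-io/libritts_r", "dev", split="dev.clean", streaming=True)
row = next(iter(ds))

inputs = feature_extractor(
    raw_audio=row["audio"]["array"],
    sampling_rate=feature_extractor.sampling_rate,  # Mimi expects 24 kHz
    return_tensors="pt",
)

with torch.no_grad():
    # num_quantizers=8 keeps the first 8 codebooks, matching this dataset's
    # (8, seqlen) `codes` feature
    encoded = model.encode(inputs["input_values"].to(device), num_quantizers=8)

codes = encoded.audio_codes[0]  # int64 tensor of shape (8, seqlen)
```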

If you want to decode the tokens back to audio, here's a snippet with HF Transformers:

```python
import torch
from transformers import MimiModel, AutoFeatureExtractor
from datasets import load_dataset
# If using Jupyter
from IPython.display import Audio, display

device = "cuda"
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
model = MimiModel.from_pretrained("kyutai/mimi")
model = model.to(device)

dataset = load_dataset("jkeisling/libritts-r-mimi")
dataset = dataset.with_format("torch")
codes = dataset["dev.clean"][0]["codes"].to(device)

# decode expects a 3D (bsz, num_codebooks=8, seqlen) tensor
with torch.no_grad():
    out_pcm = model.decode(codes.unsqueeze(0))
audio_data = out_pcm.audio_values[0].detach().to("cpu").numpy()

# If using Jupyter; Mimi decodes to 24 kHz PCM
display(Audio(audio_data, rate=24000, autoplay=False))
```
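To actually write the decoded audio to a WAV file (e.g., outside a notebook), one option is the `soundfile` package, which the snippet above doesn't use:

```python
import soundfile as sf

# audio_values[0] has shape (channels=1, samples); squeeze to a mono 1-D array
sf.write("decoded.wav", audio_data.squeeze(), samplerate=24000)
```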

Thanks to MythicInfinity, Koizumi et al. 2023 (LibriTTS-R cleaning and audio enhancement), Zen et al. 2019 (LibriTTS), and the original narrators of the corpus.

## Original LibriTTS-R README below

LibriTTS-R [1] is a sound-quality-improved version of the LibriTTS corpus (http://www.openslr.org/60/), a multi-speaker corpus of approximately 585 hours of read English speech at a 24 kHz sampling rate, published in 2019.

## Overview

This is the LibriTTS-R dataset, adapted for the `datasets` library.

## Usage

### Splits

There are 7 splits (dots replace dashes from the original dataset to comply with HF naming requirements):

- `dev.clean`
- `dev.other`
- `test.clean`
- `test.other`
- `train.clean.100`
- `train.clean.360`
- `train.other.500`

### Configurations

There are 4 configurations, each of which limits which splits the `load_dataset()` function will download.

The default configuration is `"all"`.

- `"dev"`: only the `dev.clean` split (good for testing the dataset quickly)
- `"clean"`: contains only the "clean" splits
- `"other"`: contains only the "other" splits
- `"all"`: contains all splits

### Example

Loading the `"clean"` config with only the `train.clean.100` split:

```python
load_dataset("blabble-io/libritts_r", "clean", split="train.clean.100")
```

Streaming is also supported:

```python
load_dataset("blabble-io/libritts_r", streaming=True)
```

### Columns

```python
{
    "audio": datasets.Audio(sampling_rate=24_000),
    "text_normalized": datasets.Value("string"),
    "text_original": datasets.Value("string"),
    "speaker_id": datasets.Value("string"),
    "path": datasets.Value("string"),
    "chapter_id": datasets.Value("string"),
    "id": datasets.Value("string"),
}
```

### Example Row

```python
{
  'audio': {
    'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav',
    'array': ...,
    'sampling_rate': 24000
  },
  'text_normalized': 'How quickly he disappeared!"',
  'text_original': 'How quickly he disappeared!"',
  'speaker_id': '3081',
  'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav',
  'chapter_id': '166546',
  'id': '3081_166546_000028_000002'
}
```

## Dataset Details

### Dataset Description

- **License:** CC BY 4.0


## Citation

```bibtex
@ARTICLE{Koizumi2023-hs,
  title         = "{LibriTTS-R}: A restored multi-speaker text-to-speech corpus",
  author        = "Koizumi, Yuma and Zen, Heiga and Karita, Shigeki and Ding,
                   Yifan and Yatabe, Kohei and Morioka, Nobuyuki and Bacchiani,
                   Michiel and Zhang, Yu and Han, Wei and Bapna, Ankur",
  abstract      = "This paper introduces a new speech dataset called
                   ``LibriTTS-R'' designed for text-to-speech (TTS) use. It is
                   derived by applying speech restoration to the LibriTTS
                   corpus, which consists of 585 hours of speech data at 24 kHz
                   sampling rate from 2,456 speakers and the corresponding
                   texts. The constituent samples of LibriTTS-R are identical
                   to those of LibriTTS, with only the sound quality improved.
                   Experimental results show that the LibriTTS-R ground-truth
                   samples showed significantly improved sound quality compared
                   to those in LibriTTS. In addition, neural end-to-end TTS
                   trained with LibriTTS-R achieved speech naturalness on par
                   with that of the ground-truth samples. The corpus is freely
                   available for download from \url{http://www.openslr.org/141/}.",
  month         = may,
  year          = 2023,
  copyright     = "http://creativecommons.org/licenses/by-nc-nd/4.0/",
  archivePrefix = "arXiv",
  primaryClass  = "eess.AS",
  eprint        = "2305.18802"
}
```