---
license:
  - mit
language:
  - zh
tags:
  - speech generation
  - chinese
pretty_name: Kimi-Audio-Generation-Testset
---

# Kimi-Audio-Generation-Testset

## Dataset Description

**Summary:** This dataset is designed to benchmark the conversational capabilities of audio-based dialogue models. It consists of audio files containing various instructions and conversational prompts. The primary goal is to assess a model's ability to generate responses that are not only relevant but also appropriately styled.

Specifically, the dataset targets the model's proficiency in:

  • Paralinguistic Control: Generating responses with specific control over emotion, speaking speed, and accent.
  • Empathetic Dialogue: Engaging in conversations that demonstrate understanding and empathy.
  • Style Adaptation: Delivering responses in distinct styles, including storytelling and reciting tongue twisters.

Audio conversation models are expected to process the input audio instructions and generate reasonable, contextually relevant audio responses. The generated responses are evaluated through human assessment for quality, appropriateness, and adherence to the instructed characteristics (such as emotion or style).

  • Languages: zh (Chinese)

## Dataset Structure

### Data Instances

Each line in the `test/metadata.jsonl` file is a JSON object representing one data sample. The `datasets` library uses the path in the `file_name` field to load the corresponding audio file.

Example:

```json
{"audio_content": "你能不能快速地背一遍李白的静夜思", "ability": "speed", "file_name": "wav/6.wav"}
```
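As a minimal sketch of how a metadata line maps to a sample, the example above can be parsed with Python's standard `json` module (the field names come straight from the example; no library beyond the standard one is assumed):

```python
import json

# One line of test/metadata.jsonl, copied from the example above.
line = '{"audio_content": "你能不能快速地背一遍李白的静夜思", "ability": "speed", "file_name": "wav/6.wav"}'

sample = json.loads(line)

# "audio_content" is a transcript of the spoken instruction,
# "ability" labels the capability being tested (e.g. speaking speed),
# and "file_name" is the audio path relative to the split directory.
print(sample["ability"])    # speed
print(sample["file_name"])  # wav/6.wav
```

In practice this layout is the `datasets` "audiofolder" convention, where `file_name` paths in `metadata.jsonl` are resolved relative to the metadata file when the dataset is loaded.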