---
# Required: Specify the license for your dataset
license: mit
# Required: Specify the language(s) of the dataset
language:
- zh # 中文
# Optional: Add tags for discoverability
tags:
- speech generation
- chinese
# Required: A pretty name for your dataset card
pretty_name: "Kimi-Audio-Generation-Testset"
---
# Kimi-Audio-Generation-Testset
## Dataset Description
**Summary:** This dataset is designed to benchmark and evaluate the conversational capabilities of audio-based dialogue models. It consists of a collection of audio files containing various instructions and conversational prompts. The primary goal is to assess a model's ability to generate not just relevant, but also *appropriately styled* audio responses.
Specifically, the dataset targets the model's proficiency in:
* **Paralinguistic Control:** Generating responses with specific control over **emotion**, speaking **speed**, and **accent**.
* **Empathetic Dialogue:** Engaging in conversations that demonstrate understanding and **empathy**.
* **Style Adaptation:** Delivering responses in distinct styles, including **storytelling** and reciting **tongue twisters**.
Audio conversation models are expected to process the input audio instructions and generate reasonable, contextually relevant audio responses. The generated responses are evaluated through **human assessment** for quality, appropriateness, and adherence to the instructed characteristics (such as emotion or style).
* **Languages:** zh (中文)
## Dataset Structure
### Data Instances
Each line in the `test/metadata.jsonl` file is a JSON object representing a data sample. The `datasets` library uses the path in the `file_name` field to load the corresponding audio file.
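A minimal sketch of how one `metadata.jsonl` line maps to fields, using the sample record shown below (the parsing here is plain `json`; in practice the `datasets` library reads these lines and resolves `file_name` to the audio for you):

```python
import json

# One line of test/metadata.jsonl (the sample record from this card)
line = ('{"audio_content": "你能不能快速地背一遍李白的静夜思", '
        '"ability": "speed", "file_name": "wav/6.wav"}')

sample = json.loads(line)
# audio_content: transcript of the spoken instruction
# ability:       the capability being tested (e.g. emotion, speed, accent)
# file_name:     path to the audio file, relative to the metadata file
print(sample["ability"], sample["file_name"])
```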
**Example:**
```json
{"audio_content": "你能不能快速地背一遍李白的静夜思", "ability": "speed", "file_name": "wav/6.wav"}