---
license: mit
language:
- en
tags:
- conversations
- tagging
- embeddings
- bittensor
- dialog
- social media
- podcast
pretty_name: 5,000 Podcast Conversations with Metadata and Embedding Dataset
size_categories:
- 1M<n<10M
---
## 🗂️ ReadyAI - 5,000 Podcast Conversations with Metadata and Embedding Dataset
ReadyAI, operating subnet 33 on the [Bittensor Network](https://bittensor.com/), is an open-source initiative focused on low-cost, resource-minimal pipelines for structuring raw data for AI applications.
This dataset is part of the ReadyAI Conversational Genome Project, leveraging the Bittensor decentralized network.
AI runs on structured data — and this dataset bridges the gap between raw conversation transcripts and structured, vectorized semantic tags.
You can find more about our subnet on GitHub [here](https://github.com/afterpartyai/bittensor-conversation-genome-project).
---
## Full Vector Access
➡️ **Download the full 45 GB of conversation tag embeddings** from [here](https://huggingface.co/datasets/ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset/tree/main/data), recommended for large-scale processing and fine-tuning.
---
## 📦 Dataset Versions
In addition to the full dataset, two smaller versions are available:
- **Small version**
- Located in the `small_dataset` folder.
- Contains 1,000 conversations with the same file structure as the full dataset.
- All filenames are prefixed with `small_`.
- **Medium version**
- Located in the `medium_dataset` folder.
- Contains 2,500 conversations with the same file structure as the full dataset.
- All filenames are prefixed with `medium_`.
These subsets are ideal for lightweight experimentation, prototyping, or benchmarking.
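As a quick start with the small subset, the sketch below loads its conversations file with pandas. The exact filename is an assumption based on the folder and `small_` prefix convention above; check the repository file listing if it differs.

```python
import pandas as pd

# Assumed path: the small subset follows the folder and `small_` prefix convention described above
df_small = pd.read_parquet("small_dataset/small_conversations_train.parquet")
print(len(df_small), "conversations in the small subset")
```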
---
## 📋 Dataset Overview
This dataset contains **annotated conversation transcripts** with:
- Human-readable semantic tags
- **Embedding vectors** contextualized to each conversation
- Participant metadata
It is ideal for:
- Semantic search over conversations
- AI assistant training (OpenAI models, fine-tuning)
- Vector search implementations using **pg_vector** and **Pinecone**
- Metadata analysis and tag retrieval for LLMs
The embeddings were generated with the [text-embedding-ada-002](https://huggingface.co/Xenova/text-embedding-ada-002) model and have 1536 dimensions per tag.
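As a rough sketch of semantic search over these vectors, the snippet below ranks tags by cosine similarity to a query tag. It assumes the tag parquet files have already been loaded into a `df_tags` DataFrame, as shown in the How to Use section below.

```python
import numpy as np

# df_tags is assumed to be loaded as shown in "Load all tag splits" below
vectors = np.vstack(df_tags["vector"].to_numpy())                   # shape: (n_tags, 1536)
vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)  # normalize for cosine similarity

query_idx = 0                          # any row index to use as the query tag
scores = vectors @ vectors[query_idx]  # cosine similarity against every tag
top = np.argsort(-scores)[1:6]         # 5 nearest neighbours, skipping the query itself
print(df_tags.iloc[top][["c_guid", "tag"]])
```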
---
## 📂 Dataset Structure
The dataset consists of four main components:
### 1. **data/bittensor-conversational-tags-and-embeddings-part-*.parquet** — Tag Embeddings and Metadata
Each Parquet file contains rows with:
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Unique conversation group ID |
| tag_id | int64 | Unique identifier for the tag |
| tag | string | Semantic tag (e.g., "climate change") |
| vector | list of float32 | Embedding vector representing the tag's meaning **in the conversation's context** |
✅ Files are split into ~1 GB chunks for efficient loading and streaming.
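A minimal sanity-check sketch against this schema, using the first part file as in the examples below (any downloaded part works):

```python
import pandas as pd

df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.dtypes)                                    # c_guid, tag_id, tag, vector
print(len(df["vector"].iloc[0]))                    # embedding width, expected to be 1536
print(df.groupby("c_guid")["tag"].count().head())   # number of tags per conversation
```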
---
### 2. **tag_to_id.parquet** — Tag Mapping
Mapping between tag IDs and human-readable tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| tag_id | int64 | Unique tag ID |
| tag | string | Semantic tag text |
✅ Useful for mapping tag IDs from model outputs back to human-readable tags.
---
### 3. **conversations_to_tags.parquet** — Conversation-to-Tag Mappings
Links conversations to their associated semantic tags.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| tag_ids | list of int64 | List of tag IDs relevant to the conversation |
✅ Useful for supervised training, retrieval tasks, or semantic labeling.
---
### 4. **conversations_train.parquet** — Full Conversation Text and Participants
Contains the raw multi-turn dialogue and metadata.
| Column | Type | Description |
|:-------|:-----|:------------|
| c_guid | int64 | Conversation group ID |
| transcript | string | Full conversation text |
| participants | list of strings | List of speaker identifiers |
✅ Useful for dialogue modeling, multi-speaker AI, or fine-tuning.
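A short sketch of inspecting one conversation's text and speakers (the delimiter used between turns inside `transcript` is not specified here, so the snippet only previews the raw text):

```python
import pandas as pd

df_conversations = pd.read_parquet("conversations_train.parquet")
row = df_conversations.iloc[0]
print("Conversation:", row["c_guid"])
print("Participants:", list(row["participants"]))
print(row["transcript"][:500])   # preview the start of the dialogue
```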
---
## 🚀 How to Use
**Install dependencies**
```bash
pip install pandas pyarrow datasets
```
**Download the dataset**
```python
import datasets
path = "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset"
dataset = datasets.load_dataset(path)
print(dataset['train'].column_names)
```
**Load a single Parquet split**
```python
import pandas as pd
df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.head())
```
**Load all tag splits**
```python
import pandas as pd
import glob
files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))
df_tags = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(f"Loaded {len(df_tags)} tag records.")
```
**Load tag dictionary**
```python
import pandas as pd
tag_dict = pd.read_parquet("tag_to_id.parquet")
print(tag_dict.head())
```
**Load conversation to tags mapping**
```python
import pandas as pd
df_mapping = pd.read_parquet("conversations_to_tags.parquet")
print(df_mapping.head())
```
**Load full conversations dialog and metadata**
```python
import pandas as pd
df_conversations = pd.read_parquet("conversations_train.parquet")
print(df_conversations.head())
```
---
## 🔥 Example: Reconstruct Tags for a Conversation
```python
# Build tag lookup
tag_lookup = dict(zip(tag_dict['tag_id'], tag_dict['tag']))
# Pick a conversation
sample = df_mapping.iloc[0]
c_guid = sample['c_guid']
tag_ids = sample['tag_ids']
# Translate tag IDs to human-readable tags
tags = [tag_lookup.get(tid, "Unknown") for tid in tag_ids]
print(f"Conversation {c_guid} has tags: {tags}")
```
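Building on the example above, the sketch below collects the embedding vectors for that conversation's tags. It assumes `df_tags` has been loaded as in the How to Use section and that the part files covering this conversation are included.

```python
import numpy as np

# Embeddings for the conversation selected above (assumes df_tags is loaded)
conv_tags = df_tags[df_tags["c_guid"] == c_guid]
matrix = np.vstack(conv_tags["vector"].to_numpy())
print(f"Conversation {c_guid}: {matrix.shape[0]} tag vectors of dimension {matrix.shape[1]}")
```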
---
## 📦 Handling Split Files
| Situation | Strategy |
|:----------|:---------|
| Enough RAM | Use `pd.concat()` to merge splits |
| Low memory | Process each split one-by-one (see the sketch after the streaming example) |
| Hugging Face datasets | Use streaming mode |
**Example (streaming with Hugging Face `datasets`)**
```python
from datasets import load_dataset
dataset = load_dataset(
    "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset",
    split="train",
    streaming=True
)

for example in dataset:
    print(example)
    break
```
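And a sketch of the low-memory strategy from the table above, reading one ~1 GB split at a time instead of concatenating them all:

```python
import glob
import pandas as pd

# Process each part on its own so only one split is held in memory at a time
for path in sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet")):
    df_part = pd.read_parquet(path)
    print(path, len(df_part), "rows")   # replace with your own per-split processing
    del df_part
```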
---
## 📜 License
MIT License
✅ Free to use and modify
---
## ✨ Credits
Built using contributions from Bittensor conversational miners and the ReadyAI open-source community.
---
## 🎯 Summary
| Component | Description |
|:----------|:------------|
| data/bittensor-conversational-tags-and-embeddings-part-*.parquet | Semantic tags and their contextual embeddings |
| tag_to_id.parquet | Dictionary mapping of tag IDs to text |
| conversations_to_tags.parquet | Links conversations to tags |
| conversations_train.parquet | Full multi-turn dialogue with participant metadata |