---
license: mit
language:
  - en
tags:
  - conversations
  - tagging
  - embeddings
  - bittensor
  - dialog
  - social media
  - podcast
pretty_name: 5,000 Podcast Conversations with Metadata and Embedding Dataset
size_categories:
  - 1M<n<10M
---

πŸ—‚οΈ ReadyAI - 5,000 Podcast Conversations with Metadata and Embedding Dataset

ReadyAI, operating subnet 33 on the Bittensor Network, is an open-source initiative focused on low-cost, resource-minimal pipelines for structuring raw data for AI applications. This dataset is part of the ReadyAI Conversational Genome Project, leveraging the Bittensor decentralized network.

AI runs on structured data β€” and this dataset bridges the gap between raw conversation transcripts and structured, vectorized semantic tags.

You can find more about our subnet on GitHub.


## Full Vectors Access

➑️ Download the full 45 GB of conversation tag embeddings from here.

For large-scale processing and fine-tuning.


## πŸ“¦ Dataset Versions

In addition to the full dataset, two smaller versions are available:

- **Small version**
  - Located in the `small_dataset` folder.
  - Contains 1,000 conversations with the same file structure as the full dataset.
  - All filenames are prefixed with `small_`.
- **Medium version**
  - Located in the `medium_dataset` folder.
  - Contains 2,500 conversations.
  - Uses the same file structure, with a `medium_` prefix on all filenames.

These subsets are ideal for lightweight experimentation, prototyping, or benchmarking.
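
For example, here is a minimal sketch of loading the small subset with pandas. The `small_dataset/small_conversations_train.parquet` path is an assumption based on the folder and prefix convention described above:

```python
import pandas as pd

# Load the 1,000-conversation subset; filenames follow the small_ prefix convention (assumed)
small_conversations = pd.read_parquet("small_dataset/small_conversations_train.parquet")
small_tags = pd.read_parquet("small_dataset/small_tag_to_id.parquet")

print(f"{len(small_conversations)} conversations in the small subset")
```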


## πŸ“‹ Dataset Overview

This dataset contains annotated conversation transcripts with:

- Human-readable semantic tags
- Embedding vectors contextualized to each conversation
- Participant metadata

It is ideal for:

- Semantic search over conversations
- AI assistant training (OpenAI models, fine-tuning)
- Vector search implementations using pg_vector and Pinecone
- Metadata analysis and tag retrieval for LLMs

The embeddings were generated with the `text-embedding-ada-002` model and have 1536 dimensions per tag.
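
If you want to embed your own queries or tags into the same vector space (for example, to search against the provided embeddings), a minimal sketch using the OpenAI Python client with the same model is shown below. It assumes the `openai` package is installed and an `OPENAI_API_KEY` is set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Embed a query with the same model used for this dataset
response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="climate change",
)
query_vector = response.data[0].embedding
print(len(query_vector))  # 1536
```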

## πŸ“‚ Dataset Structure

The dataset consists of four main components:

### 1. `data/bittensor-conversational-tags-and-embeddings-part-*.parquet` β€” Tag Embeddings and Metadata

Each Parquet file contains rows with:

| Column | Type | Description |
|---|---|---|
| `c_guid` | int64 | Unique conversation group ID |
| `tag_id` | int64 | Unique identifier for the tag |
| `tag` | string | Semantic tag (e.g., "climate change") |
| `vector` | list of float32 | Embedding vector representing the tag's meaning in the conversation's context |

βœ… Files are split into ~1 GB chunks for efficient loading and streaming.
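
As a rough example of working with these rows, the sketch below runs a brute-force cosine-similarity search over one split using numpy. It reuses one of the dataset's own tag vectors as a stand-in query, but any 1536-dimensional embedding in the same space (e.g., one produced as in the snippet above) would work:

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")

# Stack the per-row embedding lists into a (num_rows, 1536) matrix
matrix = np.vstack(df["vector"].to_numpy()).astype(np.float32)

# Stand-in query: the first tag's vector (substitute your own embedding here)
query_vector = matrix[0]

# Cosine similarity = dot product of L2-normalized vectors
matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
query_norm = query_vector / np.linalg.norm(query_vector)
scores = matrix_norm @ query_norm

top = df.assign(score=scores).nlargest(5, "score")
print(top[["c_guid", "tag", "score"]])
```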


### 2. `tag_to_id.parquet` β€” Tag Mapping

Mapping between tag IDs and human-readable tags.

| Column | Type | Description |
|---|---|---|
| `tag_id` | int64 | Unique tag ID |
| `tag` | string | Semantic tag text |

βœ… Useful for reverse-mapping tags from models or outputs.


### 3. `conversations_to_tags.parquet` β€” Conversation-to-Tag Mappings

Links conversations to their associated semantic tags.

| Column | Type | Description |
|---|---|---|
| `c_guid` | int64 | Conversation group ID |
| `tag_ids` | list of int64 | List of tag IDs relevant to the conversation |

βœ… Useful for supervised training, retrieval tasks, or semantic labeling.
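
For instance, here is a small sketch (using the file names and columns described above) that joins this mapping with the transcripts and tag dictionary to build a multi-label training frame:

```python
import pandas as pd

df_mapping = pd.read_parquet("conversations_to_tags.parquet")
df_conversations = pd.read_parquet("conversations_train.parquet")
tag_dict = pd.read_parquet("tag_to_id.parquet")
tag_lookup = dict(zip(tag_dict["tag_id"], tag_dict["tag"]))

# One row per conversation: transcript plus its human-readable tag labels
train = df_conversations.merge(df_mapping, on="c_guid", how="inner")
train["labels"] = train["tag_ids"].apply(
    lambda ids: [tag_lookup.get(t, "Unknown") for t in ids]
)
print(train[["c_guid", "transcript", "labels"]].head())
```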


### 4. `conversations_train.parquet` β€” Full Conversation Text and Participants

Contains the raw multi-turn dialogue and metadata.

| Column | Type | Description |
|---|---|---|
| `c_guid` | int64 | Conversation group ID |
| `transcript` | string | Full conversation text |
| `participants` | list of strings | List of speaker identifiers |

βœ… Useful for dialogue modeling, multi-speaker AI, or fine-tuning.
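
As a quick starting point, the sketch below computes simple corpus statistics (transcript length and speaker count) from these columns:

```python
import pandas as pd

df_conversations = pd.read_parquet("conversations_train.parquet")

# Rough corpus statistics over the full conversations
df_conversations["n_chars"] = df_conversations["transcript"].str.len()
df_conversations["n_speakers"] = df_conversations["participants"].apply(len)

print(df_conversations[["n_chars", "n_speakers"]].describe())
```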


## πŸš€ How to Use

### Install dependencies

```bash
pip install pandas pyarrow datasets
```

### Download the dataset

```python
import datasets

path = "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset"
dataset = datasets.load_dataset(path)

print(dataset['train'].column_names)
```

### Load a single Parquet split

```python
import pandas as pd

df = pd.read_parquet("data/bittensor-conversational-tags-and-embeddings-part-0000.parquet")
print(df.head())
```

### Load all tag splits

```python
import pandas as pd
import glob

files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))
df_tags = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)

print(f"Loaded {len(df_tags)} tag records.")
```

### Load the tag dictionary

```python
tag_dict = pd.read_parquet("tag_to_id.parquet")
print(tag_dict.head())
```

### Load the conversation-to-tags mapping

```python
df_mapping = pd.read_parquet("conversations_to_tags.parquet")
print(df_mapping.head())
```

### Load full conversation dialog and metadata

```python
df_conversations = pd.read_parquet("conversations_train.parquet")
print(df_conversations.head())
```

## πŸ”₯ Example: Reconstruct Tags for a Conversation

```python
# Build tag lookup
tag_lookup = dict(zip(tag_dict['tag_id'], tag_dict['tag']))

# Pick a conversation
sample = df_mapping.iloc[0]
c_guid = sample['c_guid']
tag_ids = sample['tag_ids']

# Translate tag IDs to human-readable tags
tags = [tag_lookup.get(tid, "Unknown") for tid in tag_ids]

print(f"Conversation {c_guid} has tags: {tags}")
```

## πŸ“¦ Handling Split Files

| Situation | Strategy |
|---|---|
| Enough RAM | Use `pd.concat()` to merge splits |
| Low memory | Process each split one-by-one (see the sketch below) |
| Hugging Face `datasets` | Use streaming mode |

### Example (streaming with Hugging Face `datasets`)

```python
from datasets import load_dataset

dataset = load_dataset(
    "ReadyAi/5000-podcast-conversations-with-metadata-and-embedding-dataset",
    split="train",
    streaming=True
)

for example in dataset:
    print(example)
    break
```
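
### Example (low memory, one split at a time)

For the low-memory strategy in the table above, here is a sketch that processes each tag split in turn instead of concatenating them:

```python
import glob
import pandas as pd

files = sorted(glob.glob("data/bittensor-conversational-tags-and-embeddings-part-*.parquet"))

total_rows = 0
for path in files:
    # Only one split is held in memory at a time
    chunk = pd.read_parquet(path)
    total_rows += len(chunk)
    # ... run your per-split processing here ...

print(f"Processed {total_rows} tag records across {len(files)} splits.")
```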

## πŸ“œ License

MIT License
βœ… Free to use and modify


## ✨ Credits

Built using contributions from Bittensor conversational miners and the ReadyAI open-source community.


## 🎯 Summary

| Component | Description |
|---|---|
| `data/bittensor-conversational-tags-and-embeddings-part-*.parquet` | Semantic tags and their contextual embeddings |
| `tag_to_id.parquet` | Dictionary mapping of tag IDs to text |
| `conversations_to_tags.parquet` | Links conversations to tags |
| `conversations_train.parquet` | Full multi-turn dialogue with participant metadata |