CSTS - Correlation Structures in Time Series
Important Notice
This dataset is published as a pre-publication release. An accompanying research paper is forthcoming on arXiv. All usage of this dataset must include proper attribution to the original authors as specified below.
Dataset Description
CSTS (Correlation Structures in Time Series) is a comprehensive synthetic benchmarking dataset for evaluating correlation structure discovery in time series data. The dataset systematically models known correlation structures between three time series variates and enables examination of how these structures are affected by distribution shifts, sparsification, and downsampling. With its controlled properties and ground truth labels, CSTS provides algorithm developers with clean benchmark data that bridges the gap between theoretical models and messy real-world data.
Key Applications
- Evaluating the ability of time series clustering algorithms to segment time series and group the resulting segments by correlation structure
- Assessing clustering validation methods for correlation-based clusters
- Investigating how data preprocessing affects correlation structure discovery
- Establishing performance thresholds for high-quality clustering results
Dataset Structure
CSTS provides two main splits (exploratory and confirmatory) with 30 subjects each, enabling proper statistical validation. The dataset structure includes:
- 12 data variants: 4 generation stages × 3 completeness levels for each split
- Generation stages: raw (unstructured data), correlated (normal-distributed data), nonnormal (extreme value and negative binomial distribution shifts), downsampled (1s→1min)
- Completeness levels: complete (100% of observations), partial (70% of observations), sparse (10% of observations)
Subjects
Each subject contains 100 segments of varying lengths (900-36,000 observations), and each segment encodes one of 23 specific correlation structures. Each subject uses all 23 patterns 4-5 times. For the complete data variants, each subject consists of ~1.26 million observations.
Subjects each have the following information, accessible as subsets:
- a time series data file with three variates (iob, cob, ig) and time stamps (datetime)
- a label file specifying the ground truth segmentation and clustering
- 67 bad clustering label files with controlled degradations (varying numbers of segmentation and/or cluster assignment mistakes) spanning the entire Jaccard Index range [0,1]
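As a minimal sketch of how these per-subject files can be accessed, the snippet below loads the ground truth labels alongside one set of degraded labels. It assumes the badclustering_labels file type composes into a configuration name the same way as data and labels (see the configuration concept under Usage Guidance below):
from datasets import load_dataset
# Ground truth segmentation and clustering for the exploratory subjects.
truth_labels = load_dataset("idegen/csts", name="correlated_complete_labels", split="exploratory")
# Degraded clustering labels with controlled mistakes (assumed config name, following the documented convention).
bad_labels = load_dataset("idegen/csts", name="correlated_complete_badclustering_labels", split="exploratory")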
Additional Splits
CSTS also includes versions (configured as splits) that allow exploring how cluster and segment counts affect algorithm performance. They follow the same dataset structure (exploratory and confirmatory splits, each with 12 data variants, each with the 3 file-type subsets). These versions are:
- Reduced cluster count: 11 or 6 instead of 23
- Reduced segment count: 50 or 25 instead of 100
Our accompanying paper provides complete methodological details, baseline findings, and usage guidance. Our GitHub codebase includes the generation, validation and use case code and is configured to automatically load the data.
Usage Guidance
Configuration Concept
The configuration follows the convention <generation_stage>_<completeness_level>_<file_type> and allows access to a specific subset of the data.
Possible values are:
Generation Stages:
- raw: raw data, segmented but not correlated
- correlated: correlated data according to a specific correlation structure, normally distributed
- nonnormal: distribution shifted, correlated data
- downsampled: resampled non-normal data from 1s to 1min
Completeness Levels:
- complete: 100% of the data
- partial: 70% of the data (30% of observations dropped at random)
- sparse: 10% of the data (90% of observations dropped at random)
File Types:
- data: loads the time series data file (needed for training algorithms)
- labels: loads the labels file for the ground truth (perfect) segmentation and clustering (needed for validating the results)
- badclustering_labels: loads the labels file for a degraded clustering with controlled segmentation and/or cluster assignment mistakes
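For example, a configuration name can be composed from these three components as a plain string; the sketch below simply follows the naming convention above:
from datasets import load_dataset
stage = "nonnormal"        # raw | correlated | nonnormal | downsampled
completeness = "sparse"    # complete | partial | sparse
file_type = "data"         # data | labels | badclustering_labels
config_name = f"{stage}_{completeness}_{file_type}"  # -> "nonnormal_sparse_data"
dataset = load_dataset("idegen/csts", name=config_name, split="confirmatory")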
Splits
The main splits are:
- exploratory: for experimentation and training
- confirmatory: for testing and validation
Note that, depending on the application and study design, a single subject might be sufficient for training.
Additional splits are:
- reduced_11_clusters(_exploratory or _confirmatory): same data including 11 of the original 23 clusters (selected at random)
- reduced_6_clusters(_exploratory or _confirmatory): same data including 6 of the original 23 clusters (selected at random)
- reduced_50_segments(_exploratory or _confirmatory): same data including 50 of the original 100 segments (selected at random)
- reduced_25_segments(_exploratory or _confirmatory): same data including 25 of the original 100 segments (selected at random)
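For instance, a reduced version can be loaded by combining a reduction prefix with a main split name; this sketch assumes the split naming shown above resolves directly:
from datasets import load_dataset
# Exploratory data variant restricted to 11 of the original 23 clusters.
reduced = load_dataset("idegen/csts", name="correlated_complete_data", split="reduced_11_clusters_exploratory")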
Quick Start
Example 1 - complete and correlated data variant
- Load the data for all 30 exploratory subjects for the complete and correlated data variant into a pandas DataFrame:
import pandas as pd
from datasets import load_dataset
correlated_data = load_dataset("idegen/csts", name="correlated_complete_data", split="exploratory")
df_correlated = correlated_data.to_pandas()
df_correlated.head()
- Load the ground truth labels for these subjects:
import pandas as pd
from datasets import load_dataset
correlated_labels = load_dataset("idegen/csts", name="correlated_complete_labels", split="exploratory")
df_correlated_labels = correlated_labels.to_pandas()
df_correlated_labels.head()
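- Combine the two: the sketch below slices one labelled segment out of the time series and inspects its empirical correlation structure. The start idx and end idx columns come from the label file schema; that the data split carries a matching subject_id column, and that end idx is inclusive, are assumptions to verify against the loaded DataFrames:
from datasets import load_dataset
df_correlated = load_dataset("idegen/csts", name="correlated_complete_data", split="exploratory").to_pandas()
df_labels = load_dataset("idegen/csts", name="correlated_complete_labels", split="exploratory").to_pandas()
# Pick the first labelled segment and its subject's time series (assumed subject_id column in both files).
segment = df_labels.iloc[0]
subject_data = df_correlated[df_correlated["subject_id"] == segment["subject_id"]].reset_index(drop=True)
# Slice the segment by its start/end indices (assumed inclusive end idx).
window = subject_data.iloc[int(segment["start idx"]):int(segment["end idx"]) + 1]
# Empirical Pearson correlation matrix of the three variates.
print(window[["iob", "cob", "ig"]].corr())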
... more examples coming soon
Authors
- Isabella Degen, University of Bristol
- Zahraa S Abdallah, University of Bristol
- Henry W J Reeve, University of Nanjing
- Kate Robson Brown, University College Dublin
Pre-Publication Release Details
- Release Date: 29 Apr 2025
- Version: 1.0-pre
- Status: Pre-publication release
- Paper Status: Forthcoming on arXiv (expected publication: May 2025)
Citation
Please use the following temporary citation until our paper is published:
% BibTeX citation format - update when paper is published
@misc{csts2025,
  author = {Degen, I and Abdallah, Z S and Reeve, H W J and Robson Brown, K},
  title = {CSTS: Evaluating Correlation Structures in Time Series},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {Pre-publication dataset release},
  url = {https://huggingface.co/datasets/idegen/csts},
  note = {ArXiv preprint forthcoming}
}
Once our paper is published on arXiv, we will update this README with the proper citation information. Please check back for updates.
Acknowledgements
... coming soon