task_categories:
- feature-extraction
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
tags:
- co-speech gestures
- gesture-spotting
- video-understanding
pretty_name: AVS-Spot
size_categories:
- n<1K
source_datasets:
- extended
Dataset Card for AVS-Spot Benchmark
This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"
arXiv:
Project page: https://www.robots.ox.ac.uk/~vgg/research/jegal
We present JEGAL, a Joint Embedding space for Gestures, Audio and Language. Our semantic gesture representations can be used to perform multiple downstream tasks such as cross-modal retrieval, spotting gestured words, and identifying who is speaking solely using gestures.
Table of Contents
- Dataset Card for AVS-Spot Benchmark
- Table of Contents
- What is the AVS-Spot Benchmark?
- Dataset Structure
- Dataset Curation
- Citation
- Acknowledgements
What is the AVS-Spot Benchmark?
Summary
AVS-Spot is a benchmark for evaluating the task of gestured word-spotting. It contains 500 videos sampled from the official AVSpeech test set. Each video contains at least one clearly gestured word, annotated as the "target word". Additionally, we provide other annotations, including the text phrase, word boundaries, and speech-stress labels for each sample.
Task: Given a target word and an input gesture video with its transcript/speech, the goal is to localize the occurrence of the target word in the video based on gestures.
Some examples from the dataset are shown below. Note: the green highlight box in the video is for visualization purposes only. The actual dataset does not contain these boxes; instead, we provide the target word's start and end frames as part of the annotations.
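To make the task concrete, the snippet below is a minimal, hypothetical scoring sketch (not the official evaluation code): a prediction is treated as correct if the predicted frame index falls inside the annotated target-word window.

```python
# Hypothetical scoring helper for gestured word-spotting (illustrative only).
# A model predicts the frame at which the target word is gestured; the
# prediction is correct if it falls inside the annotated [start, end] window.

def is_correct_spot(predicted_frame: int, target_word_boundary) -> bool:
    """Return True if the predicted frame lies within the target-word boundary."""
    _word, start_frame, end_frame = target_word_boundary
    return start_frame <= predicted_frame <= end_frame


# Example with the boundary from the sample instance shown later in this card:
print(is_correct_spot(30, ["beautiful", 21, 37]))  # True
print(is_correct_spot(50, ["beautiful", 21, 37]))  # False
```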
Download instructions
Run the following scripts to download and pre-process the dataset:
```python
from datasets import load_dataset

# Load the dataset CSV file with annotations
dataset = load_dataset("sindhuhegde/avs-spot")
```

```bash
# Download the videos using the YouTube IDs and timestamps
python download_videos.py --video_root=<dataset-path>

# Obtain the crops with the target speaker
python preprocess_videos.py --data_root=<dataset-path> --preprocessed_root=<path-to-save-the-preprocessed-data> --merge_dir=<path-to-save-audio-video-merged-results> --temp_dir=<path-to-save-intermediate-results> --metadata_root=<path-to-save-the-metadata>
```
Once the dataset is downloaded and pre-processed, the structure of the folders will be as follows:
```
video_root (path of the downloaded videos)
└── *.mp4 (videos)

preprocessed_root (path of the pre-processed videos)
└── list of video-ids
    ├── *.avi (extracted person-track video for each sample)
    └── *.wav (extracted person-track audio for each sample)

merge_dir (path of the merged videos)
└── *.mp4 (target-speaker videos with audio)
```
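As a quick sanity check, the person-track videos and their audio files in `preprocessed_root` can be paired as in the minimal sketch below (it assumes the layout above; the placeholder path must be replaced with your own).

```python
# Pair each extracted person-track video (.avi) with its audio file (.wav)
# inside preprocessed_root, following the folder layout shown above.
from pathlib import Path

preprocessed_root = Path("<path-to-save-the-preprocessed-data>")  # replace with your path

for video_path in sorted(preprocessed_root.glob("*/*.avi")):
    audio_path = video_path.with_suffix(".wav")
    status = "ok" if audio_path.exists() else "missing audio"
    print(f"{video_path.parent.name}/{video_path.stem}: {status}")
```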
Dataset Structure
Data Fields
- `video_id`: YouTube video ID
- `start_time`: Start time (in seconds)
- `end_time`: End time (in seconds)
- `filename`: Filename along with the target-speaker crop number (obtained after pre-processing)
- `num_frames`: Number of frames in the video after pre-processing
- `phrase`: Text transcript of the video
- `target_word`: Target word (the word to be spotted)
- `target_word_boundary`: Word boundary of the target word. Format: [target-word, start_frame, end_frame]
- `word_boundaries`: Word boundaries for all the words in the video. Format: [[word-1, start_frame, end_frame], [word-2, start_frame, end_frame], ..., [word-n, start_frame, end_frame]]
- `stress_label`: Binary label indicating whether the target word has been stressed in the corresponding speech
Data Instances
Each instance in the dataset contains the above fields. An example instance is shown below.
```json
{
  "video_id": "jnsuH9_qYyA",
  "start_time": 26.562700,
  "end_time": 29.802700,
  "filename": "jnsuH9_qYyA_26.562700-29.802700/00000",
  "num_frames": 83,
  "phrase": "app is beautiful it just is streamlined it",
  "target_word": "beautiful",
  "target_word_boundary": "['beautiful', 21, 37]",
  "word_boundaries": "[['app', 0, 11], ['is', 12, 13], ['beautiful', 21, 37], ['it', 45, 47], ['just', 48, 53], ['is', 60, 63], ['streamlined', 65, 81], ['it', 82, 83]]",
  "stress_label": 1
}
```
See the AVS-Spot dataset viewer to explore more examples.
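Note that `target_word_boundary` and `word_boundaries` are stored as stringified lists, as in the instance above. A minimal sketch of parsing them is shown below; since the frame rate is not stored explicitly, it is approximated here from `num_frames` and the clip duration.

```python
# Parse the stringified boundary fields and convert the target-word boundary
# from frame indices to approximate timestamps within the clip.
import ast

from datasets import load_dataset

dataset = load_dataset("sindhuhegde/avs-spot")
split = list(dataset.keys())[0]  # the benchmark ships a single (test) split
sample = dataset[split][0]

word, start_frame, end_frame = ast.literal_eval(sample["target_word_boundary"])
word_boundaries = ast.literal_eval(sample["word_boundaries"])

duration = sample["end_time"] - sample["start_time"]  # clip length in seconds
fps = sample["num_frames"] / duration                 # approximate frame rate
print(f"'{word}' is gestured roughly between {start_frame / fps:.2f}s "
      f"and {end_frame / fps:.2f}s into the clip "
      f"({len(word_boundaries)} words in total)")
```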
Dataset Curation
AVS-Spot is a dataset of video clips where a specific word is distinctly gestured. We begin with the full English test set from the AVSpeech dataset and extract word-aligned transcripts using the WhisperX ASR model. Short phrases containing 4 to 12 words are then selected, ensuring that the clips exhibit distinct gesture movements. We then manually review and annotate clips with a target word, where the word is visibly gestured. This process results in 500 curated clips, each containing a well-defined gestured word. The manual annotation ensures minimal label noise, enabling a reliable evaluation of the gesture-spotting task. Additionally, we provide binary stress/emphasis labels for target words, capturing key gesture-related cues.
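A minimal sketch of the phrase-length filter from this pipeline is shown below; it assumes word-aligned transcripts are available as (word, start, end) tuples (e.g. from an ASR + alignment tool such as WhisperX). The gesture check and target-word annotation were done manually and are not part of this sketch.

```python
# Keep only short phrases containing 4 to 12 aligned words, mirroring the
# phrase-selection step described above (illustrative, not the actual pipeline code).

def keep_phrase(aligned_words, min_words=4, max_words=12):
    """Return True if the phrase length falls within the allowed word range."""
    return min_words <= len(aligned_words) <= max_words


phrase = [("app", 0.00, 0.45), ("is", 0.48, 0.55), ("beautiful", 0.85, 1.50)]
print(keep_phrase(phrase))  # False: only 3 words, outside the 4-12 range
```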
Summarized dataset information is given below:
- Source: AVSpeech
- Language: English
- Modalities: Video, audio, text
- Labels: Target-word, word-boundaries, speech-stress binary label
- Task: Gestured word spotting
Statistics
| Dataset | Split | # Hours | # Speakers | Avg. clip duration (s) | # Videos |
|---|---|---|---|---|---|
| AVS-Spot | test | 0.38 | 391 | 2.73 | 500 |
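The duration-based numbers in the table can be reproduced from the annotation fields alone; a minimal sketch (using the `start_time` / `end_time` columns) is given below.

```python
# Recompute the clip-duration statistics of the benchmark from its annotations.
from datasets import load_dataset

dataset = load_dataset("sindhuhegde/avs-spot")
split = list(dataset.keys())[0]
durations = [row["end_time"] - row["start_time"] for row in dataset[split]]

print(f"# Videos: {len(durations)}")
print(f"# Hours: {sum(durations) / 3600:.2f}")
print(f"Avg. clip duration (s): {sum(durations) / len(durations):.2f}")
```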
Below, we show some additional statistics for the dataset: (i) the duration of videos in terms of number of frames, (ii) a word cloud of the most gestured words in the dataset, illustrating the diversity of the words present, and (iii) the distribution of target-word occurrences in the video.
Citation
If you find this dataset helpful, please consider starring ⭐ the repository and citing our work.
```bibtex
@article{Hegde_ArXiv_2025,
  title={Understanding Co-speech Gestures in-the-wild},
  author={Hegde, Sindhu and Prajwal, K R and Kwon, Taein and Zisserman, Andrew},
  journal={arXiv},
  year={2025}
}
```
Acknowledgements
The authors would like to thank Piyush Bagad, Ragav Sachdeva, and Jaesung Hugh for their valuable discussions. They also extend their thanks to David Pinto for setting up the data annotation tool and to Ashish Thandavan for his support with the infrastructure. This research is funded by EPSRC Programme Grant VisualAI EP/T028572/1, and a Royal Society Research Professorship RP\R1\191132.