prajwalkr14 committed · verified
Commit ab8fe9c · 1 Parent(s): 5cac312

fixed initial descriptions and the statistics table

Files changed (1):
  README.md  +11 -12
README.md CHANGED
@@ -19,15 +19,15 @@ source_datasets:
 ---


-# Dataset Card for AVS-Spot Dataset
+# Dataset Card for AVS-Spot Benchmark



-This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"
+This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"

 - 📝 ArXiv:

-- 🌐 Project page: https://www.robots.ox.ac.uk/~vgg/research/jegal
+- 🌐 Project page: https://www.robots.ox.ac.uk/~vgg/research/jegal

 <p align="center">
   <img src="assets/teaser.gif", width="450"/>
@@ -37,9 +37,9 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi

 ## 📋 Table of Contents

-- [Dataset Card for AVS-Spot Dataset](#dataset-card-for-avs-spot-dataset)
+- [Dataset Card for AVS-Spot Benchmark](#dataset-card-for-avs-spot-benchmark)
   - [📋 Table of Contents](#📋-table-of-contents)
-  - [📚 Dataset Description](#📚-dataset-description)
+  - [📚 What is the AVS-Spot Benchmark?](#📚-what-is-the-avs-spot-benchmark)
     - [Summary](#summary)
     - [Download instructions](#download-instructions)
   - [📁 Dataset Structure](#📁-dataset-structure)
@@ -54,15 +54,15 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi



-## 📚 Dataset Description
+## 📚 What is the AVS-Spot Benchmark?

 ### Summary

-AVS-Spot is a **gestured word-spotting** video dataset containing **500 videos**, sampled from the AVSpeech official test dataset. The dataset is curated such that each video contains at least one clearly gestured word, annotated as the "target word". Additionally, we provide other annotations, including the text phrase, word boundaries, and speech-stress labels for each sample.
+AVS-Spot is a benchmark for evaluating the task of **gestured word-spotting**. It contains **500 videos**, sampled from the AVSpeech official test dataset. Each video contains at least one clearly gestured word, annotated as the "target word". Additionally, we provide other annotations, including the text phrase, word boundaries, and speech-stress labels for each sample.

-**Task:** Given an input video, a target word, and a transcript/speech, the goal is to localize the occurrence of the target word in the video based on gestures.
+**Task:** Given a target word and an input gesture video with its transcript/speech, the goal is to localize the occurrence of the target word in the video based on gestures.

-Some examples from the dataset are shown below. Note that the green box that highlights the word when it is gestured in the video is for visualization purposes only. The actual dataset does not contain these boxes; instead, we provide the target word's start and end frames as part of the annotations.
+Some examples from the dataset are shown below. Note: the green highlight box in the video is for visualization purposes only. The actual dataset does not contain these boxes; instead, we provide the target word's start and end frames as part of the annotations.

 <p align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
   <img src="assets/data1.gif" width="110" height="120" style="margin-right: 10px;"/>
@@ -170,9 +170,8 @@ Summarized dataset information is given below:
 ### Statistics

 | Dataset | Split | # Hours | # Speakers | Avg. clip duration (s) | # Videos |
-|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
-AVS-Spot | test | 0.38 | 391 | 2.73 | 500 |
-
+|:--------:|:-----:|:-------:|:-----------:|:-----------------:|:--------:|
+| AVS-Spot | test | 0.38 | 391 | 2.73 | 500 |

 Below, we show some additional statistics for the dataset: (i) Duration of videos in terms of number of frames, (ii) Wordcloud of most gestured words in the dataset, illustrating the diversity of the different words present, and (iii) The distribution of target-word occurrences in the video.
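For readers skimming this change, the updated summary and task description amount to a simple evaluation setup: each video carries a target word and the start/end frames where that word is gestured, and a system must point at that span. Below is a minimal, hypothetical Python sketch of how such annotations could be scored; the field names (`video_id`, `target_word`, `start_frame`, `end_frame`) and the hit criterion are illustrative assumptions, not the dataset's actual schema or official evaluation protocol.

```python
# Hypothetical sketch only -- not the official AVS-Spot schema or metric.
from dataclasses import dataclass

@dataclass
class SpotAnnotation:
    video_id: str
    target_word: str
    start_frame: int  # first frame of the gestured target word (assumed field name)
    end_frame: int    # last frame of the gestured target word, inclusive (assumed field name)

def is_hit(ann: SpotAnnotation, predicted_frame: int) -> bool:
    """A prediction counts as a hit if the predicted frame lies inside the annotated span."""
    return ann.start_frame <= predicted_frame <= ann.end_frame

def spotting_accuracy(annotations: list, predictions: dict) -> float:
    """Fraction of videos whose predicted frame (e.g. the peak of a gesture-word
    similarity curve) falls inside the annotated target-word span."""
    hits = sum(is_hit(ann, predictions[ann.video_id]) for ann in annotations)
    return hits / len(annotations)

if __name__ == "__main__":
    anns = [SpotAnnotation("clip_0001", "enormous", start_frame=34, end_frame=52)]
    print(spotting_accuracy(anns, {"clip_0001": 41}))  # 1.0 -> predicted frame is inside the span
```

The other annotations mentioned in the summary (text phrase, word boundaries, speech-stress labels) could be attached to the same per-video record in an analogous way.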