fixed initial descriptions and the statistics table #1
by prajwalkr14 - opened

README.md CHANGED
@@ -19,15 +19,15 @@ source_datasets:
 ---
 
 
-# Dataset Card for AVS-Spot
+# Dataset Card for AVS-Spot Benchmark
 
 
 
-This
+This dataset is associated with the paper: "Understanding Co-Speech Gestures in-the-wild"
 
 - π ArXiv:
 
-- π Project
+- π Project page: https://www.robots.ox.ac.uk/~vgg/research/jegal
 
 <p align="center">
 <img src="assets/teaser.gif", width="450"/>
@@ -37,9 +37,9 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi
 
 ## π Table of Contents
 
-- [Dataset Card for AVS-Spot
+- [Dataset Card for AVS-Spot Benchmark?](#dataset-card-for-avs-spot-benchmark)
 - [π Table of Contents](#π-table-of-contents)
-- [π
+- [π What is the AVS-Spot Benchmark](#π-dataset-description)
 - [Summary](#summary)
 - [Download instructions](#download-instructions)
 - [π Dataset Structure](#π-dataset-structure)
@@ -54,15 +54,15 @@ We present **JEGAL**, a **J**oint **E**mbedding space for **G**estures, **A**udi
 
 
 
-## π
+## π What is the AVS-Spot Benchmark?
 
 ### Summary
 
-AVS-Spot is a **gestured word-spotting
+AVS-Spot is a benchmark for evaluating the task of **gestured word-spotting**. It contains **500 videos**, sampled from the AVSpeech official test dataset. Each video contains at least one clearly gestured word, annotated as the "target word". Additionally, we provide other annotations, including the text phrase, word boundaries, and speech-stress labels for each sample.
 
-**Task:** Given
+**Task:** Given a target word, an input gesture video with a transcript/speech, the goal is to localize the occurrence of the target word in the video based on gestures.
 
-Some examples from the dataset are shown below. Note
+Some examples from the dataset are shown below. Note: the green highlight box in the video is for visualization purposes only. The actual dataset does not contain these boxes; instead, we provide the target word's start and end frames as part of the annotations.
 
 <p align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
 <img src="assets/data1.gif" width="110" height="120" style="margin-right: 10px;"/>
@@ -170,9 +170,8 @@ Summarized dataset information is given below:
 ### Statistics
 
 | Dataset | Split | # Hours | # Speakers | Avg. clip duration | # Videos |
-
-AVS-Spot | test
-
+|:--------:|:-----:|:-------:|:-----------:|:-----------------:|:--------:|
+| AVS-Spot | test | 0.38 | 391 | 2.73 | 500 |
 
 Below, we show some additional statistics for the dataset: (i) Duration of videos in terms of number of frames, (ii) Wordcloud of most gestured words in the dataset, illustrating the diversity of the different words present, and (iii) The distribution of target-word occurences in the video.
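For readers of the updated card, the task definition and the start/end-frame annotations suggest a simple evaluation loop. Below is a minimal sketch in Python of scoring a predicted frame against the annotated target-word span; the record fields (`video_id`, `target_word`, `start_frame`, `end_frame`) and the hit criterion are illustrative assumptions, not the dataset's published schema or official metric.

```python
# Minimal sketch of scoring gestured word-spotting predictions against
# AVS-Spot-style annotations. Field names and the hit criterion below are
# illustrative assumptions, not the dataset's actual schema or metric.
from dataclasses import dataclass

@dataclass
class SpotAnnotation:
    video_id: str
    target_word: str   # the clearly gestured word to localize
    start_frame: int   # first frame of the target-word occurrence
    end_frame: int     # last frame of the target-word occurrence

def is_hit(ann: SpotAnnotation, predicted_frame: int) -> bool:
    """Count a prediction as correct if it falls inside the annotated span."""
    return ann.start_frame <= predicted_frame <= ann.end_frame

def accuracy(annotations, predictions):
    """Fraction of videos whose predicted frame lands in the target-word span."""
    hits = sum(is_hit(a, predictions[a.video_id]) for a in annotations)
    return hits / len(annotations)

if __name__ == "__main__":
    anns = [SpotAnnotation("clip_0001", "huge", 34, 52)]   # hypothetical example
    preds = {"clip_0001": 40}
    print(accuracy(anns, preds))  # -> 1.0
```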
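As a quick sanity check on the new statistics row, the reported average clip duration follows from the total hours and the video count (a rough check only, since the table's figures are rounded):

```python
# Rough consistency check for the statistics table:
# 0.38 hours spread over 500 clips is about 2.7 s per clip.
total_hours = 0.38
num_videos = 500
avg_clip_seconds = total_hours * 3600 / num_videos
print(f"{avg_clip_seconds:.2f} s")  # 2.74 s, in line with the table's 2.73 (rounded inputs)
```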