Add Github repository

#4
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +10 -11
README.md CHANGED
```diff
@@ -1,23 +1,22 @@
 ---
-task_categories:
-- feature-extraction
-- text-to-video
 annotations_creators:
 - expert-generated
 language:
 - en
+size_categories:
+- n<1K
+source_datasets:
+- extended
+task_categories:
+- feature-extraction
+- text-to-video
+pretty_name: AVS-Spot
 tags:
 - co-speech gestures
 - gesture-spotting
 - video-understanding
 - multimodal-learning
-pretty_name: AVS-Spot
-size_categories:
-- n<1K
-source_datasets:
-- extended
 ---
-
 
 # Dataset Card for AVS-Spot Benchmark
 
@@ -26,6 +25,7 @@ This dataset is associated with the paper: "Understanding Co-Speech Gestures in-
 
 - πŸ“ ArXiv: https://arxiv.org/abs/2503.22668
 - 🌐 Project page: https://www.robots.ox.ac.uk/~vgg/research/jegal
+- πŸ’» Code: https://github.com/Sindhu-Hegde/jegal
 
 <p align="center">
   <img src="assets/teaser.gif", width="450"/>
@@ -191,5 +191,4 @@ If you find this dataset helpful, please consider starring ⭐ the repository an
 
 ## πŸ™ Acknowledgements
 
-The authors would like to thank Piyush Bagad, Ragav Sachdeva, and Jaesung Hugh for their valuable discussions. They also extend their thanks to David Pinto for setting up the data annotation tool and to Ashish Thandavan for his support with the infrastructure. This research is funded by EPSRC Programme Grant VisualAI EP/T028572/1, and a Royal Society Research Professorship RP \textbackslash R1 \textbackslash 191132.
-
+The authors would like to thank Piyush Bagad, Ragav Sachdeva, and Jaesung Hugh for their valuable discussions. They also extend their thanks to David Pinto for setting up the data annotation tool and to Ashish Thandavan for his support with the infrastructure. This research is funded by EPSRC Programme Grant VisualAI EP/T028572/1, and a Royal Society Research Professorship RP \textbackslash R1 \textbackslash 191132.
```
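
For reference, the dataset card's YAML front matter after this change should read roughly as follows; this is a reconstruction from the hunks above (not a copy of the merged file), so field order reflects the diff as shown:

```yaml
---
annotations_creators:
- expert-generated
language:
- en
size_categories:
- n<1K
source_datasets:
- extended
task_categories:
- feature-extraction
- text-to-video
pretty_name: AVS-Spot
tags:
- co-speech gestures
- gesture-spotting
- video-understanding
- multimodal-learning
---
```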