---
license: apache-2.0
task_categories:
  - video-text-to-text
---

## V-NIAH-D Benchmark

A Visual Needle-In-A-Haystack benchmark with periodic distractors, presented in the paper *VideoRoPE: What Makes for Good Video Rotary Position Embedding?*

The benchmark can be used by following steps similar to those for V-NIAH; a minimal sketch for fetching the files is shown below.
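
As a starting point, the benchmark files can be pulled from the Hub with `huggingface_hub`. This is a sketch only: the `repo_id` and the `V-NIAH-D` subfolder pattern below are assumptions and should be replaced with the actual repository id and folder layout of this dataset.

```python
# Minimal sketch: download the V-NIAH-D benchmark files from the Hub.
# NOTE: repo_id and allow_patterns are assumptions; adjust them to the
# actual repository id and directory structure of this dataset.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Wiselnn570/VideoRoPE",   # assumed repo id
    repo_type="dataset",
    allow_patterns=["V-NIAH-D/*"],    # assumed benchmark subfolder
)
print(f"Benchmark files downloaded to: {local_dir}")
```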

## VideoRoPE Training Data

To facilitate the reproduction of our experimental results, we have also uploaded the training data used by VideoRoPE, a subset of the LLaVA-Video-178K dataset.

The LLaVA-Video-178K dataset consists of 178K videos and approximately 5 million question-answer (QA) pairs drawn from diverse sources such as HD-VILA, Kinetics, and ActivityNet. To balance training efficiency and long-video comprehension, we randomly selected 136K videos with durations under 2 minutes and 18K videos with durations between 2 and 3 minutes, resulting in a training set of approximately 1.3 million QA pairs.
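
The duration-based subset selection described above can be illustrated with a short script. This is a sketch under assumed names: the metadata format (a list of records with a `duration` field in seconds) is hypothetical and does not reflect the actual LLaVA-Video-178K schema; only the counts and duration thresholds come from the description above.

```python
import random

def select_training_subset(videos, seed=0):
    """Illustrative sketch of the duration-based subset selection.

    Assumes `videos` is a list of dicts with a "duration" field in seconds
    (a hypothetical schema, used only for illustration).
    """
    rng = random.Random(seed)

    # Split videos by duration: under 2 minutes vs. 2-3 minutes.
    under_2min = [v for v in videos if v["duration"] < 120]
    two_to_three = [v for v in videos if 120 <= v["duration"] <= 180]

    # Randomly sample 136K short videos and 18K medium-length videos,
    # as described above (capped by what is actually available).
    subset = (
        rng.sample(under_2min, min(136_000, len(under_2min)))
        + rng.sample(two_to_three, min(18_000, len(two_to_three)))
    )
    return subset
```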