📦 GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation

Submission to the NeurIPS 2025 D&B Track, under review.

Teaser of GS2E

🧾 Dataset Summary

GS2E (Gaussian Splatting for Event stream Extraction) is a synthetic multi-view event dataset designed to support high-fidelity 3D scene understanding, novel view synthesis, and event-based neural rendering. Unlike previous video-driven or graphics-only event datasets, GS2E leverages 3D Gaussian Splatting (3DGS) to generate geometry-consistent, photorealistic RGB frames from sparse camera poses, followed by physically-informed event simulation with adaptive contrast threshold modeling. The dataset enables scalable, controllable, and sensor-faithful generation of realistic event streams with aligned RGB frames and camera poses.


📚 Dataset Description

Event cameras offer unique advantages—such as low latency, high temporal resolution, and high dynamic range—making them ideal for 3D reconstruction and SLAM under rapid motion and challenging lighting. However, the lack of large-scale, geometry-consistent event datasets has hindered the development of event-driven or hybrid RGB-event methods.

GS2E addresses this gap by synthesizing event data from sparse, static RGB images. Using 3D Gaussian Splatting (3DGS), we reconstruct high-fidelity 3D scenes and generate dense camera trajectories to render blur-free and motion-blurred sequences. These sequences are then processed by a physically-grounded event simulator, incorporating adaptive contrast thresholds that vary across scenes and motion profiles.
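To make the mechanism concrete, below is a minimal NumPy sketch of contrast-threshold event generation from rendered frames. It is illustrative only, not the GS2E simulator: the function name and the fixed threshold are assumptions, and the real pipeline additionally adapts the threshold per scene and motion profile and interpolates event timestamps between frames.

```python
import numpy as np

def simulate_events(log_frames, timestamps, threshold=0.2):
    """Toy contrast-threshold event generation (illustrative, not GS2E's simulator).

    log_frames: (N, H, W) float array of log-intensity frames
    timestamps: (N,) frame times in microseconds
    threshold:  contrast threshold C; a pixel emits an event once its
                log intensity drifts more than C from its reference value
    """
    events = []                                  # collected as (t, x, y, polarity)
    ref = log_frames[0].astype(np.float64).copy()  # per-pixel reference log intensity
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else 0
            events.append((t, x, y, polarity))
            # Move the reference one threshold step toward the new intensity.
            # A real simulator emits floor(|diff| / C) events per crossing and
            # interpolates their timestamps between the two frames.
            ref[y, x] += np.sign(diff[y, x]) * threshold
    return np.asarray(events)
```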

The dataset includes:

  • 21 distinct scenes, each with 3 event sequences under varying blur levels (slight, medium, and severe)
  • Per-frame photorealistic RGB renderings (clean and motion-blurred)
  • Ground-truth camera poses
  • Geometry-consistent synthetic event streams

The result is a simulation-friendly yet physically-informed dataset for training and evaluating event-based 3D reconstruction, localization, SLAM, and novel view synthesis.

If you use this synthetic event dataset for your work, please cite:

  TBD

Dataset Structure and Contents

This synthetic event dataset is organized by scene, with each scene directory containing synchronized multimodal data for RGB-event processing tasks. The data was derived from MVImgNet and processed via GS2E to generate high-quality event streams. Each scene includes the following elements:

Path / File          | Data Type                  | Description
-------------------- | -------------------------- | -------------------------------------------------
images/              | RGB image sequence         | Sharp, high-resolution ground-truth RGB frames
images_blur_<level>/ | Blurred RGB image sequence | Images with different degrees of artificial blur
sparse/              | COLMAP sparse model        | Contains cameras.bin, images.bin, points3D.bin
events.h5            | Event data (HDF5)          | Compressed event stream stored as (t, x, y, p)
  • The events.h5 file stores events in the format:
    [timestamp (μs), x (px), y (px), polarity (1/0)]
    (see the loading sketch after this list)
  • images_blur_<level>/ folders contain renderings with increasing blur intensity, corresponding to the slight/medium/severe levels above.
  • sparse/ is generated by COLMAP and includes camera intrinsics and poses.
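For example, the event stream can be loaded with h5py. The dataset key below ("events") and the (N, 4) layout are assumptions; list the file's keys first to confirm the actual structure:

```python
import h5py
import numpy as np

with h5py.File("events.h5", "r") as f:
    print(list(f.keys()))      # inspect the real layout first
    events = f["events"][:]    # assumed: an (N, 4) array of (t, x, y, p)

t, x, y, p = events.T
# Assumes the stream is sorted by timestamp, with t in microseconds.
print(f"{len(events)} events spanning {(t[-1] - t[0]) / 1e6:.2f} s, "
      f"{np.mean(p == 1):.1%} positive polarity")
```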

This structure enables joint processing of visual and event data for various tasks such as event-based deblurring, video reconstruction, and hybrid SfM pipelines.
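The camera intrinsics and poses in sparse/ can be read with COLMAP's Python bindings. This is a sketch assuming pycolmap is installed and the model sits directly under sparse/ (COLMAP often nests it in sparse/0/); attribute names vary slightly across pycolmap versions:

```python
import pycolmap

rec = pycolmap.Reconstruction("scene/sparse")  # adjust to sparse/0 if nested

for cam_id, cam in rec.cameras.items():
    # intrinsics: camera model and its parameter vector
    print(cam_id, cam.model, cam.width, cam.height, cam.params)

for img_id, img in rec.images.items():
    # world-to-camera pose; older pycolmap releases expose qvec/tvec instead
    print(img.name, img.cam_from_world)
```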

Teaser of GS2E

Setup

  1. Install Git LFS according to the official instructions.
  2. Set up Git LFS for your user account with:
    git lfs install
    
  3. Clone this dataset repository into the desired destination directory with:
    git clone https://huggingface.co/datasets/Falcary/GS2E
    
  4. To minimize disk usage, you can remove the .git/ folder; note, however, that this prevents pulling future updates from the upstream dataset repository.
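Alternatively, if you prefer not to use git at all, the huggingface_hub Python package can fetch the repository (assuming it is installed, e.g. via pip install huggingface_hub):

```python
from huggingface_hub import snapshot_download

# Downloads all files of the dataset repo into the local HF cache
# and returns the path to the downloaded snapshot.
path = snapshot_download(repo_id="Falcary/GS2E", repo_type="dataset")
print(path)
```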

License: CC BY 4.0
