Update README.md #1
opened by INTOTHEMILD

README.md CHANGED
@@ -20,11 +20,11 @@ GS2E addresses this gap by synthesizing event data from sparse, static RGB images

The dataset includes:

- * **21 multi-view event sequences** across **7 scenes** and **3 blur levels** (slight/medium/severe)
+ * **21 distinct scenes**, each with **3** corresponding event sequences under **varying blur levels** (slight, medium, and severe)
+ <!-- * **21 multi-view event sequences** across **7 scenes** and **3 blur levels** (slight/medium/severe) -->
* Per-frame photorealistic RGB renderings (clean and motion-blurred)
* Ground truth camera poses
* Geometry-consistent synthetic event streams
- * Consistent intrinsics and camera paths with NeRF Synthetic Dataset

The result is a simulation-friendly yet physically-informed dataset for training and evaluating event-based 3D reconstruction, localization, SLAM, and novel view synthesis.
@@ -37,9 +37,24 @@ If you use this synthetic event dataset for your work, please cite:

## Dataset Structure and Contents

- This synthetic event dataset is organized first by scene, then by level of difficulty [...]
+ This synthetic event dataset is organized by scene, with each scene directory containing synchronized multimodal data for RGB-event processing tasks. The data was derived from MVImgNet and processed via GS2E to generate high-quality event streams. Each scene includes the following elements:

- | ROS Topic | Data | Publishing Rate (Hz) |
+ | Path / File | Data Type | Description |
+ |-----------------------------|----------------------------|------------------------------------------------------|
+ | `images/` | RGB image sequence | Sharp, high-resolution ground truth RGB frames |
+ | `images_blur_<level>/` | Blurred RGB image sequence | Images with different degrees of artificial blur |
+ | `sparse/` | COLMAP sparse model | Contains `cameras.txt`, `images.txt`, `points3D.txt` |
+ | `events.h5` | Event data (HDF5) | Compressed event stream as (t, x, y, p) |
+
+ - The `events.h5` file stores events in the format:
+   `[timestamp (μs), x (px), y (px), polarity (1/0)]`
+ - `images_blur_<level>/` folders indicate increasing blur intensity.
+ - `sparse/` is generated by COLMAP and includes camera intrinsics and poses.
+
+ This structure enables joint processing of visual and event data for various tasks such as event-based deblurring, video reconstruction, and hybrid SfM pipelines.
+
+
+ <!-- | ROS Topic | Data | Publishing Rate (Hz) |
| :--- | :--- | :--- |
| `/cam0/events` | Events | - |
| `/cam0/pose` | Camera Pose | 1000 |
@@ -52,7 +67,7 @@ This synthetic event dataset is organized first by scene, then by level of difficulty
It is obtained by running the improved ESIM with the associated `esim.conf` configuration file, which references camera intrinsics configuration files `pinhole_mono_nodistort_f={1111, 1250}.yaml` and camera trajectory CSV files `{hemisphere, sphere}_spiral-rev=4[...].csv`.

The validation and test views of each scene are given in the `views/` folder, which is structured according to the NeRF synthetic dataset (except for the depth and normal maps). These views are rendered from the scene Blend-files, given in the `scenes/` folder. Specifically, we create a [Conda](https://docs.conda.io/en/latest/) environment with [Blender as a Python module](https://docs.blender.org/api/current/info_advanced_blender_as_bpy.html) installed, according to [these instructions](https://github.com/wengflow/rpg_esim#blender), to run the `bpy_render_views.py` Python script for rendering the evaluation views.
-
+ -->

## Setup
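For reference, the event layout added in the README changes above can be read back directly with h5py. The sketch below is minimal and illustrative only: the scene path is a placeholder, and the dataset key names `t`, `x`, `y`, `p` are assumptions about the internal layout of `events.h5`, so inspect the file before relying on them.

```python
import h5py
import numpy as np

# Minimal sketch for reading a scene's synthetic event stream from events.h5.
# The dataset keys "t", "x", "y", "p" are assumptions -- list f.keys() first
# to confirm the actual layout.
with h5py.File("scene_name/events.h5", "r") as f:
    print(list(f.keys()))        # inspect the stored datasets
    t = np.asarray(f["t"])       # timestamps in microseconds
    x = np.asarray(f["x"])       # pixel column
    y = np.asarray(f["y"])       # pixel row
    p = np.asarray(f["p"])       # polarity: 1 (ON) / 0 (OFF)

# Stack into the [timestamp, x, y, polarity] layout described in the README.
events = np.stack([t, x, y, p], axis=1)
print(events.shape, events[:3])
```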
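Likewise, since `sparse/` is a standard COLMAP model, the camera poses can be recovered from its text export. The sketch below is a minimal, illustrative parser for `images.txt`; the `sparse/0` sub-directory and the scene path are assumptions, as some exports place the text files directly under `sparse/`.

```python
from pathlib import Path

def read_colmap_poses(model_dir):
    """Minimal sketch: map image name -> (qvec, tvec) from a COLMAP images.txt.

    COLMAP's text format stores two lines per image; the first holds
    IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME (world-to-camera pose),
    the second lists the 2D keypoints and is skipped here.
    """
    lines = [l for l in Path(model_dir, "images.txt").read_text().splitlines()
             if l.strip() and not l.startswith("#")]
    poses = {}
    for header in lines[::2]:                   # every other line is an image header
        fields = header.split()
        qvec = tuple(map(float, fields[1:5]))   # qw, qx, qy, qz
        tvec = tuple(map(float, fields[5:8]))   # tx, ty, tz
        poses[fields[9]] = (qvec, tvec)
    return poses

# The "sparse/0" sub-path is an assumption; adjust to the actual export layout.
poses = read_colmap_poses("scene_name/sparse/0")
print(len(poses), "posed images")
```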
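On the unchanged rendering instructions above: once Blender is available as a Python module in the Conda environment, the evaluation views are produced by `bpy_render_views.py`. The sketch below only illustrates the core `bpy` calls such a script builds on; the Blend-file path, output path, and camera handling are placeholders, so follow the linked instructions and the script's own arguments for the real procedure.

```python
import bpy  # Blender as a Python module (pip-installed bpy)

# Illustrative only: open a scene Blend-file from scenes/ and render one still.
# bpy_render_views.py additionally sets the camera pose for each evaluation view.
bpy.ops.wm.open_mainfile(filepath="scenes/scene_name.blend")  # placeholder path
scene = bpy.context.scene
scene.render.image_settings.file_format = "PNG"
scene.render.filepath = "/tmp/view_0000.png"                  # placeholder output
bpy.ops.render.render(write_still=True)
```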