MOInst Dataset (Multi-Object Instruction)
Overview
MOInst (Multi-Object Instruction) is a specialized dataset created for training and evaluating computer vision models on their ability to perceive overlooked information in images. The dataset is built by carefully selecting and annotating images from the Visual Genome dataset, focusing on object instances that are typically missed by standard vision models.
Relation to GiVE
This dataset was developed as part of the research presented in "GiVE: Guiding Visual Encoder to Perceive Overlooked Information" (ICME 2025). GiVE is a framework that enhances vision models by guiding them to perceive information they would otherwise overlook. The MOInst dataset serves as both a training resource and an evaluation benchmark for this purpose.
Dataset Structure
The dataset is organized into two splits:

- Training set: `MOInst_train.json`, containing the annotated images used for model training (see the loading sketch below)
- Test set: `MOInst_test.json`, used for evaluation
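The annotation files can be read with plain JSON tooling. The snippet below is a minimal sketch, not an official loader: it assumes each file is a JSON object with `categories`, `images`, and `annotations` lists, and that each annotation carries `image_id`, `file_name`, `image_path`, `category_id`, `object_id`, and `captions` fields. Please verify the layout against the actual files.

```python
import json

# Minimal loading sketch. The layout is assumed here: a top-level dict with
# "categories", "images", and "annotations" lists; field names are not an
# official schema.
with open("MOInst_train.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Map category ids to human-readable names
id_to_name = {cat["id"]: cat["name"] for cat in data["categories"]}

# Inspect a few annotated object instances and their captions
for ann in data["annotations"][:3]:
    print(
        ann["image_path"],                   # relative path under the Visual Genome images folder
        id_to_name.get(ann["category_id"]),  # focused object category
        ann["captions"][:1],                 # first caption for this object
    )
```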
Data Source
All images and captions in the MOInst dataset are sourced from the Visual Genome dataset; on top of these, we added annotations marking the focused objects.
Usage
The MOInst dataset is designed for:
- Training vision models to better perceive overlooked information in images
- Evaluating model performance on identifying missed object instances (see the evaluation sketch below)
- Benchmarking vision-language models on fine-grained visual understanding tasks
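For the evaluation use case, one possible setup is to measure how many of the annotated object categories in each image a model actually reports. The snippet below is a hypothetical sketch, not an official evaluation script: it assumes a per-image `category_ids` list in the annotation file and a user-supplied `predict_categories` function.

```python
import json

def evaluate_category_recall(annotation_file, predict_categories):
    """Hypothetical recall over annotated (often overlooked) categories.

    Assumes the annotation file provides per-image ground-truth category ids
    and per-object image paths; `predict_categories` is a user-supplied
    callable mapping an image path to an iterable of predicted category ids.
    """
    with open(annotation_file, "r", encoding="utf-8") as f:
        data = json.load(f)

    # image_id -> image_path, taken from the per-object annotations (assumed layout)
    paths = {ann["image_id"]: ann["image_path"] for ann in data["annotations"]}

    hits, total = 0, 0
    for entry in data["images"]:
        gt = set(entry["category_ids"])
        pred = set(predict_categories(paths[entry["image_id"]]))
        hits += len(gt & pred)
        total += len(gt)
    return hits / total if total else 0.0
```

A higher recall indicates that the model recovers more of the annotated categories, including those that standard visual encoders tend to miss.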
Citation
If you use this dataset in your research, please cite the GiVE paper:
@article{DBLP:journals/corr/abs-2410-20109,
author = {Junjie Li and
Jianghong Ma and
Xiaofeng Zhang and
Yuhang Li and
Jianyang Shi},
title = {GiVE: Guiding Visual Encoder to Perceive Overlooked Information},
journal = {CoRR},
volume = {abs/2410.20109},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2410.20109},
doi = {10.48550/ARXIV.2410.20109},
eprinttype = {arXiv},
eprint = {2410.20109},
timestamp = {Thu, 28 Nov 2024 21:32:45 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2410-20109.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}