---
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
tags:
  - remote_sensing
  - vlm
size_categories:
  - 10K<n<100K
configs:
  - config_name: Classification
    data_files:
      - split: AID_sample
        path: >-
          Eardial_downstream_task_datasets/Classification/AID/test/data-00000-of-00001.arrow
      - split: UCM_sample
        path: >-
          Eardial_downstream_task_datasets/Classification/UCM/data-00000-of-00001.arrow
      - split: BigEarthNet_RGB_sample
        path: >-
          Eardial_downstream_task_datasets/Classification/BigEarthNet_RGB/BigEarthNet_test/data-00000-of-00004.arrow
      - split: WHU_sample
        path: >-
          Eardial_downstream_task_datasets/Classification/WHU_19/data-00000-of-00001.arrow
  - config_name: GeoChat_Bench
    data_files:
      - split: GeoChat_Bench_sample
        path: >-
          Eardial_downstream_task_datasets/Detection/Geochat_Bench/data-00000-of-00001.arrow
  - config_name: Detection
    data_files:
      - split: NWPU_VHR_10_sample
        path: >-
          Eardial_downstream_task_datasets/Detection/NWPU_VHR_10_test/data-00000-of-00001.arrow
      - split: Swimming_pool_dataset_sample
        path: >-
          Eardial_downstream_task_datasets/Detection/Swimming_pool_dataset_test/data-00000-of-00001.arrow
      - split: ship_dataset_v0_sample
        path: >-
          Eardial_downstream_task_datasets/Detection/ship_dataset_v0_test/data-00000-of-00001.arrow
      - split: urban_tree_crown_sample
        path: >-
          Eardial_downstream_task_datasets/Detection/urban_tree_crown_detection_test/data-00000-of-00001.arrow
  - config_name: Region_captioning
    data_files:
      - split: NWPU_VHR_10
        path: >-
          Eardial_downstream_task_datasets/Region_captioning/NWPU_VHR_10_test_region_captioning/data-00000-of-00001.arrow
  - config_name: Image_captioning
    data_files:
      - split: sydney_Captions
        path: >-
          Eardial_downstream_task_datasets/Image_captioning/sydney_Captions/sydney_Captions_test/data-00000-of-00001.arrow
      - split: UCM_Captions
        path: >-
          Eardial_downstream_task_datasets/Image_captioning/UCM_Captions/UCM_Captions_test/data-00000-of-00001.arrow
      - split: RSICD_Captions
        path: >-
          Eardial_downstream_task_datasets/Image_captioning/RSICD_Captions/RSICD_Captions_test/data-00000-of-00001.arrow
---

# 🌍 EarthDial-Dataset

The EarthDial-Dataset is a curated collection of evaluation-only datasets focused on remote sensing and Earth observation downstream tasks. It is designed to benchmark vision-language models (VLMs) and multimodal reasoning systems on real-world scenarios involving satellite and aerial imagery.


## 📚 Key Features

- Evaluation-focused: All datasets are for inference/testing only; no train/val splits are provided.
- Diverse Tasks (see the config-listing sketch after this list):
  - Classification
  - Object Detection
  - Change Detection
  - Grounding Description
  - Region Captioning
  - Image Captioning
  - Visual Question Answering (GeoChat Bench)
- Remote Sensing Specific: Tailored for multispectral, RGB, and high-resolution satellite data.
- Multimodal Format: Includes images, questions, captions, annotations, and geospatial metadata.
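
Each task group above is exposed as a named dataset config with per-dataset evaluation splits, as declared in the YAML metadata at the top of this card. A minimal sketch for enumerating them with the `datasets` library (network access to the Hub is assumed):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo_id = "akshaydudhane/EarthDial-Dataset"

# Task-level configs declared in this card
# (e.g. Classification, Detection, Image_captioning, ...).
configs = get_dataset_config_names(repo_id)
print("configs:", configs)

# Evaluation splits under each config
# (e.g. AID_sample, UCM_sample for Classification).
for config in configs:
    print(config, "->", get_dataset_split_names(repo_id, config))
```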

πŸ—‚οΈ Dataset Structure

The dataset is structured under the root folder `EarthDial_downstream_task_datasets/`. Each task has its own subdirectory with `.arrow`-formatted shards, organized as follows:

```
EarthDial_downstream_task_datasets/
│
├── Classification/
│   ├── AID/
│   │   └── test/data-00000-of-00001.arrow
│   └── ...
│
├── Detection/
│   ├── NWPU_VHR_10_test/
│   ├── Swimming_pool_dataset_test/
│   └── ...
│
├── Region_captioning/
│   └── NWPU_VHR_10_test_region_captioning/
│
├── Image_captioning/
│   ├── RSICD_Captions/
│   └── UCM_Captions/
└── ...
```
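
Individual shards can also be fetched and memory-mapped directly. A minimal sketch, using one of the shard paths listed in the metadata above (assumes `huggingface_hub` and `datasets` are installed):

```python
from huggingface_hub import hf_hub_download
from datasets import Dataset

# Download a single .arrow shard from the dataset repo
# (path taken from the config section of this card).
shard_path = hf_hub_download(
    repo_id="akshaydudhane/EarthDial-Dataset",
    repo_type="dataset",
    filename="Eardial_downstream_task_datasets/Classification/AID/test/data-00000-of-00001.arrow",
)

# Memory-map the shard and inspect its contents.
ds = Dataset.from_file(shard_path)
print(ds.num_rows, ds.column_names)
```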

## πŸ—‚οΈ Example data usage

```python
from datasets import load_dataset

dataset = load_dataset(
    "akshaydudhane/EarthDial-Dataset",
    data_dir="EarthDial_downstream_task_datasets/Classification/AID/test"
)
```
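
The repository can also be loaded by config name; the config and split names come from the metadata above. A minimal sketch:

```python
from datasets import load_dataset

# Load the Classification config; splits such as AID_sample and
# UCM_sample are declared in the dataset card metadata.
classification = load_dataset("akshaydudhane/EarthDial-Dataset", "Classification")
print(classification)

# Inspect one split (column names vary per task/dataset).
aid = classification["AID_sample"]
print(aid.num_rows, aid.column_names)
```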

## Example Demo Usage

The script below (saved here as `demo_infer.py`) runs single-image inference with an EarthDial checkpoint from the Hugging Face Hub. It uses the `earthdial` package from the EarthDial code repository for the model class and image preprocessing.

```python
import argparse
import torch
from PIL import Image
from transformers import AutoTokenizer
from earthdial.model.internvl_chat import InternVLChatModel
from earthdial.train.dataset import build_transform

def run_single_inference(args):
    # Load model and tokenizer from Hugging Face Hub
    print(f"Loading model and tokenizer from Hugging Face: {args.checkpoint}")
    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint, trust_remote_code=True, use_fast=False)
    model = InternVLChatModel.from_pretrained(
        args.checkpoint,
        low_cpu_mem_usage=True,
        torch_dtype=torch.bfloat16,
        device_map="auto" if args.auto else None,
        load_in_8bit=args.load_in_8bit,
        load_in_4bit=args.load_in_4bit
    ).eval()

    if not args.load_in_8bit and not args.load_in_4bit and not args.auto:
        model = model.cuda()

    # Load and preprocess image
    image = Image.open(args.image_path).convert("RGB")
    image_size = model.config.force_image_size or model.config.vision_config.image_size
    transform = build_transform(is_train=False, input_size=image_size, normalize_type='imagenet')
    pixel_values = transform(image).unsqueeze(0).cuda().to(torch.bfloat16)

    # Generate answer
    generation_config = {
        "num_beams": args.num_beams,
        "max_new_tokens": 100,
        "min_new_tokens": 1,
        "do_sample": args.temperature > 0,
        "temperature": args.temperature,
    }

    answer = model.chat(
        tokenizer=tokenizer,
        pixel_values=pixel_values,
        question=args.question,
        generation_config=generation_config,
        verbose=True
    )

    print("\n=== Inference Result ===")
    print(f"Question: {args.question}")
    print(f"Answer: {answer}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--checkpoint', type=str, required=True, help='Model repo ID on Hugging Face Hub')
    parser.add_argument('--image-path', type=str, required=True, help='Path to local input image')
    parser.add_argument('--question', type=str, required=True, help='Question to ask about the image')
    parser.add_argument('--num-beams', type=int, default=5)
    parser.add_argument('--temperature', type=float, default=0.0)
    parser.add_argument('--load-in-8bit', action='store_true')
    parser.add_argument('--load-in-4bit', action='store_true')
    parser.add_argument('--auto', action='store_true')

    args = parser.parse_args()
    run_single_inference(args)
```



Run the script from the command line, for example:

```bash
python demo_infer.py \
  --checkpoint akshaydudhane/EarthDial_4B_RGB \
  --image-path ./test.jpg \
  --question "Which road has more vehicles?" \
  --auto
```
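
To run the same model over one of the evaluation splits instead of a single image, the `model.chat` call can be wrapped in a loop. A minimal sketch, assuming `model`, `tokenizer`, and `transform` are set up as in the demo script above, and that the split exposes `image` and `question` columns (hypothetical names; check `column_names` for the split you evaluate):

```python
from datasets import load_dataset
import torch

# Hypothetical column names ("image", "question"); the actual
# schema depends on the task and dataset.
split = load_dataset("akshaydudhane/EarthDial-Dataset", "Classification")["AID_sample"]

generation_config = {"num_beams": 5, "max_new_tokens": 100, "min_new_tokens": 1, "do_sample": False}

for sample in split:
    image = sample["image"].convert("RGB")  # assumed PIL image column
    pixel_values = transform(image).unsqueeze(0).cuda().to(torch.bfloat16)
    answer = model.chat(
        tokenizer=tokenizer,
        pixel_values=pixel_values,
        question=sample["question"],  # assumed question column
        generation_config=generation_config,
    )
    print(answer)
```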