Update README.md
      path: Eardial_downstream_task_datasets/Image_captioning/RSICD_Captions/RSICD_Captions_test/data-00000-of-00001.arrow
---

# EarthDial-Dataset

The **EarthDial-Dataset** is a curated collection of evaluation-only datasets focused on remote sensing and Earth observation downstream tasks. It is designed to benchmark **vision-language models (VLMs)** and **multimodal reasoning systems** on real-world scenarios involving satellite and aerial imagery.

---

## Key Features

- **Evaluation-focused**: All datasets are for inference/testing only; no train/val splits.
- **Diverse Tasks**:
  - Classification
  - Object Detection
  - Change Detection
  - Grounding Description
  - Region Captioning
  - Image Captioning
  - Visual Question Answering (GeoChat Bench)
- **Remote Sensing Specific**: Tailored for multispectral, RGB, and high-resolution satellite data.
- **Multimodal Format**: Includes images, questions, captions, annotations, and geospatial metadata.

---

## Dataset Structure

The dataset is structured under the root folder `EarthDial_downstream_task_datasets/`. Each task has its own subdirectory with `.arrow`-formatted shards, structured as:

```bash
EarthDial_downstream_task_datasets/
│
├── Classification/
│   ├── AID/
│   │   └── test/data-00000-of-00001.arrow
│   └── ...
│
├── Detection/
│   ├── NWPU_VHR_10_test/
│   ├── Swimming_pool_dataset_test/
│   └── ...
│
├── Region_captioning/
│   └── NWPU_VHR_10_test_region_captioning/
│
├── Image_captioning/
│   ├── RSICD_Captions/
│   ├── UCM_Captions/
│   └── ...
```
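
To see which task directories and shards are actually present before downloading anything, one option is to enumerate the repo contents with `huggingface_hub`. This is a minimal sketch, not part of the dataset's own tooling; the grouping by path component simply mirrors the tree above:

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repo, then keep only the .arrow shards
files = list_repo_files("akshaydudhane/EarthDial-Dataset", repo_type="dataset")
shards = [f for f in files if f.endswith(".arrow")]

# Group shard paths by task directory (the second path component)
tasks = sorted({f.split("/")[1] for f in shards if "/" in f})
print(tasks)  # e.g. Classification, Detection, Image_captioning, ...
```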

## Example Data Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    "akshaydudhane/EarthDial-Dataset",
    data_dir="EarthDial_downstream_task_datasets/Classification/AID/test"
)
```
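
A short follow-up for inspecting what was loaded. The split name and column names below are assumptions (shard-based loads often land under a single default split, and columns differ per task):

```python
# Print the DatasetDict to see which split(s) actually came back
print(dataset)

split_name = list(dataset.keys())[0]
sample = dataset[split_name][0]
print(sample.keys())  # per-task columns, e.g. image plus label/caption/question
```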

## Example Demo Usage

The script below (saved as `demo_infer.py`) runs single-image inference with an EarthDial checkpoint:

```python
import argparse

import torch
from PIL import Image
from transformers import AutoTokenizer

from earthdial.model.internvl_chat import InternVLChatModel
from earthdial.train.dataset import build_transform


def run_single_inference(args):
    # Load model and tokenizer from Hugging Face Hub
    print(f"Loading model and tokenizer from Hugging Face: {args.checkpoint}")
    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint, trust_remote_code=True, use_fast=False)
    model = InternVLChatModel.from_pretrained(
        args.checkpoint,
        low_cpu_mem_usage=True,
        torch_dtype=torch.bfloat16,
        device_map="auto" if args.auto else None,
        load_in_8bit=args.load_in_8bit,
        load_in_4bit=args.load_in_4bit
    ).eval()

    if not args.load_in_8bit and not args.load_in_4bit and not args.auto:
        model = model.cuda()

    # Load and preprocess image
    image = Image.open(args.image_path).convert("RGB")
    image_size = model.config.force_image_size or model.config.vision_config.image_size
    transform = build_transform(is_train=False, input_size=image_size, normalize_type='imagenet')
    pixel_values = transform(image).unsqueeze(0).cuda().to(torch.bfloat16)

    # Generate answer
    generation_config = {
        "num_beams": args.num_beams,
        "max_new_tokens": 100,
        "min_new_tokens": 1,
        "do_sample": args.temperature > 0,
        "temperature": args.temperature,
    }

    answer = model.chat(
        tokenizer=tokenizer,
        pixel_values=pixel_values,
        question=args.question,
        generation_config=generation_config,
        verbose=True
    )

    print("\n=== Inference Result ===")
    print(f"Question: {args.question}")
    print(f"Answer: {answer}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--checkpoint', type=str, required=True, help='Model repo ID on Hugging Face Hub')
    parser.add_argument('--image-path', type=str, required=True, help='Path to local input image')
    parser.add_argument('--question', type=str, required=True, help='Question to ask about the image')
    parser.add_argument('--num-beams', type=int, default=5)
    parser.add_argument('--temperature', type=float, default=0.0)
    parser.add_argument('--load-in-8bit', action='store_true')
    parser.add_argument('--load-in-4bit', action='store_true')
    parser.add_argument('--auto', action='store_true')

    args = parser.parse_args()
    run_single_inference(args)
```

Run it from the command line, for example:

```bash
python demo_infer.py \
    --checkpoint akshaydudhane/EarthDial_4B_RGB \
    --image-path ./test.jpg \
    --question "Which road has more vehicles?" \
    --auto
```
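
The single-image demo and the Hub dataset can also be combined. The sketch below reuses `model`, `tokenizer`, `transform`, and `generation_config` from the demo script, together with `dataset` and `split_name` from the loading examples above; the `image` column name and the prompt are assumptions to adjust per task:

```python
# Sketch: run the already-loaded model over the first few AID test images.
# Assumes an "image" column that decodes to a PIL image.
for sample in dataset[split_name].select(range(3)):
    pixel_values = transform(sample["image"].convert("RGB")) \
        .unsqueeze(0).cuda().to(torch.bfloat16)
    answer = model.chat(
        tokenizer=tokenizer,
        pixel_values=pixel_values,
        question="Classify the scene in this aerial image.",  # illustrative prompt
        generation_config=generation_config,
    )
    print(answer)
```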