# 🏥 Multilingual Multimodal Medical Exam Dataset for Visual Question Answering in Healthcare
The Multilingual Multimodal Medical Exam Dataset (MMMED) is a comprehensive benchmark designed to evaluate Vision-Language Models (VLMs) on medical multiple-choice question answering (MCQA) tasks. It combines medical images with multiple-choice questions in Spanish, English, and Italian, derived from Spain's Médico Interno Residente (MIR) residency exams.
The dataset includes challenging, real-world medical content, with images from various diagnostic scenarios, making it ideal for assessing VLMs in cross-lingual medical tasks.
## How to Access the Dataset
You can access the MMMED dataset via Hugging Face. Follow these steps to download it:
```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("praiselab-picuslab/MMMED")
```
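Once loaded, you can inspect the splits and an individual record. A minimal sketch; the printed column names tell you the actual schema, so treat any field names used later on this page as assumptions to verify:

```python
from datasets import load_dataset

ds = load_dataset("praiselab-picuslab/MMMED")

# Show the available splits and their column names.
print(ds)

# Inspect the first record of the first split.
split = list(ds.keys())[0]
for key, value in ds[split][0].items():
    print(f"{key}: {value}")
```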
## Key Features
- Languages: 🇪🇸 Spanish, 🇬🇧 English, 🇮🇹 Italian
- Medical Content: Questions based on real Spanish residency exams
- Image Types: Diagnostic medical images (e.g., CT scans, X-rays)
- Categories: 24 medical specialties (e.g., Digestive Surgery, Cardiology)
- Multimodal: Each question comes with a medical image 📸
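To work with a single specialty or language, you can filter the loaded split. A minimal sketch, assuming a `Category` column holding the specialty name (verify the actual column names as shown above):

```python
from datasets import load_dataset

ds = load_dataset("praiselab-picuslab/MMMED")
split = list(ds.keys())[0]

# "Category" is an assumed column name; replace it with the actual field.
cardiology = ds[split].filter(lambda row: row["Category"] == "Cardiology")
print(f"{len(cardiology)} of {len(ds[split])} questions are Cardiology questions")
```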
## 🛠️ Dataset Workflow
Here is the general workflow for building the MMMED dataset for VLM evaluation:
## Dataset Overview
The MMMED dataset contains 194 questions from the MIR exams, each paired with an image from a real-world medical context. The questions are organized into 24 medical categories, each combining a textual question, an associated image, and multiple-choice options.
| Statistic | 🇪🇸 Spanish | 🇬🇧 English | 🇮🇹 Italian |
|---|---|---|---|
| # Questions | 194 | 194 | 194 |
| # Categories | 24 | 24 | 24 |
| Last Update | 2024 | 2024 | 2024 |
| Avg. Option Length (tokens) | 6.85 | 6.57 | 6.71 |
| Max. Option Length (tokens) | 41 | 39 | 39 |
| Total Question Tokens | 10,898 | 10,213 | 10,545 |
| Total Option Tokens | 5,644 | 5,417 | 5,528 |
| Avg. Question Length (tokens) | 56.18 | 52.64 | 54.36 |
| Max. Question Length (tokens) | 223 | 190 | 197 |
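The average lengths above follow directly from the totals: average question length is total question tokens divided by the 194 questions. A quick check in Python:

```python
# Sanity-check the table: avg. question length = total question tokens / questions.
total_question_tokens = {"Spanish": 10_898, "English": 10_213, "Italian": 10_545}
n_questions = 194

for language, tokens in total_question_tokens.items():
    print(f"{language}: {tokens / n_questions:.2f} tokens per question")
# Spanish: 56.18, English: 52.64, Italian: 54.36 -- matching the table.
```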
## 🖼️ Image Types
Categorization of Image Types in the MMMED Dataset. This figure presents the four main categories of images included in the dataset and their respective distributions.
## ✨ Example MMCQA
Each multimodal multiple-choice question-answer (MMCQA) pair integrates the following components:
- Category: C
- Question: Q
- Image URL: I
- Answer Options: O
- Correct Answer: A
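To feed such a pair to a VLM, the textual parts are typically flattened into a single prompt alongside the image. A minimal sketch based on the structure above; the field names and the four-option `A`–`D` scheme are assumptions to adapt to the actual schema:

```python
def build_prompt(record: dict) -> str:
    """Flatten one MMCQA record (minus the image) into a text prompt."""
    options = "\n".join(
        f"{letter}. {text}" for letter, text in zip("ABCD", record["Options"])
    )
    return (
        f"[{record['Category']}]\n"
        f"{record['Question']}\n"
        f"{options}\n"
        "Answer with the letter of the correct option."
    )

# Hypothetical record; real field names may differ.
record = {
    "Category": "Cardiology",
    "Question": "Which abnormality does the image show?",
    "Options": ["Finding 1", "Finding 2", "Finding 3", "Finding 4"],
}
print(build_prompt(record))
```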
Here's an illustrative example of multimodal QA in three languages:
## List of Open-Source and Closed-Source Vision-Language Models (VLMs) Used
This table shows the parameter sizes, language models, vision models, and average scores of VLMs evaluated on the OpenVLM Leaderboard.
| Rank | Method | Param (B) | Language Model | Vision Model | Avg Score (%) |
|---|---|---|---|---|---|
| **Open-Source Models** | | | | | |
| 167 | PaliGemma-3B-mix-448 | 3 | Gemma-2B | SigLIP-400M | 46.5 |
| 108 | DeepSeek-VL2-Tiny | 3.4 | DeepSeekMoE-3B | SigLIP-400M | 58.1 |
| 135 | Phi-3.5-Vision | 4 | Phi-3.5 | CLIP ViT-L/14 | 53.0 |
| 209 | LLaVA-v1.5-7B | 7.2 | Vicuna-v1.5-7B | CLIP ViT-L/14 | 36.9 |
| **Closed-Source Models** | | | | | |
| 34 | Claude3.5-Sonnet-20241022 | Unknown | Closed-Source | Closed-Source | 70.6 |
| 24 | GPT-4o (1120, detail-high) | Unknown | Closed-Source | Closed-Source | 72.0 |
| 20 | Gemini-2.0-Flash | Unknown | Closed-Source | Closed-Source | 72.6 |
## VLM Performance on MMMED
The following figure presents the accuracy of different VLMs in each language tested:
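The underlying metric is plain per-language accuracy over the 194 questions. A minimal scoring sketch, assuming you have collected `(language, predicted_letter, gold_letter)` triples from your own model runs:

```python
from collections import defaultdict

def accuracy_by_language(results):
    """results: iterable of (language, predicted_letter, gold_letter)."""
    correct, total = defaultdict(int), defaultdict(int)
    for language, predicted, gold in results:
        total[language] += 1
        correct[language] += int(predicted == gold)
    return {lang: correct[lang] / total[lang] for lang in total}

# Toy predictions; real runs would have 194 entries per language.
results = [("Spanish", "A", "A"), ("English", "B", "C"), ("Italian", "D", "D")]
print(accuracy_by_language(results))
```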
## Notes
Dataset Usage: The dataset is intended for academic and research purposes only. It is not recommended for clinical decision-making or commercial use.
👨‍💻 This project was developed by Antonio Romano, Giuseppe Riccio, Mariano Barone, Gian Marco Orlando, Diego Russo, Marco Postiglione, and Vincenzo Moscato at the University of Naples Federico II.
## License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.