
MME-Industry: A Cross-Industry Multimodal Evaluation Benchmark

📖 Overview

MME-Industry is a meticulously curated benchmark designed to comprehensively evaluate Multimodal Large Language Models (MLLMs) across diverse industrial applications. It aims to fill a critical gap in assessing MLLMs' capabilities in specialized, real-world scenarios and to provide practical guidance for model optimization and deployment.

Key Features

1. Comprehensive Industrial Coverage

MME-Industry spans 21 distinct industrial sectors, including power generation, electronics manufacturing, textile production, the steel industry, chemical processing, and more. Each sector contains 50 carefully designed question-answer pairs, resulting in a total of 1,050 high-quality QA samples.

2. Expert-Validated Content

All QA pairs in MME-Industry are manually crafted and validated by domain experts to ensure data integrity and practical relevance. This rigorous validation process guarantees the accuracy and industry-specificity of the benchmark, minimizing the risk of data leakage from public datasets.

3. Multilingual Support

MME-Industry provides both English and Chinese versions of the benchmark, enabling comparative analysis of MLLMs' capabilities across these languages. This dual-language setup supports cross-lingual research initiatives and caters to the needs of a broader research community.

4. Non-OCR Questions

To enhance the challenge level and practical relevance, MME-Industry incorporates questions that cannot be answered through simple OCR-based text recognition. Instead, these questions require specialized domain knowledge and reasoning skills, ensuring a fair evaluation of models' multimodal capabilities.

5. Rich Data Format

Each sample in MME-Industry is structured with rich information to support various research applications (a short loading sketch follows the field list below):

  • Image: High-resolution industrial images with an average resolution of 1110×859 pixels.
  • Question: A well-defined question related to the image.
  • Answer: The correct answer to the question.
  • Options: Multiple-choice options, including a "reject" option ("E") for cases where the model cannot recognize the relevant features in the image.
  • Domain: Hierarchical classification of the industrial sector.
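
A minimal loading sketch in Python, assuming the dataset is fetched from the Hugging Face Hub with the `datasets` library; the repository id, split name, and lower-case column names below are assumptions based on the field list above and may need adjusting:

from datasets import load_dataset

# Hypothetical repository id and split name; replace with the actual dataset path.
ds = load_dataset("MME-Industry/MME-Industry", split="train")

sample = ds[0]
print(sample["question"])   # question text tied to the image
print(sample["options"])    # multiple-choice options, including the "reject" option E
print(sample["answer"])     # ground-truth option letter
print(sample["domain"])     # hierarchical industry label
sample["image"].save("example.png")  # decoded PIL image of the industrial scene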

📊 Main Results

We have conducted extensive evaluations of multiple state-of-the-art MLLMs on the MME-Industry benchmark. The results highlight significant variations in model performance across different industrial domains and languages. For example:

  • Qwen2-VL-72B-Instruct achieved the highest overall accuracy of 78.66% in Chinese and 75.04% in English.
  • Claude-3.5-Sonnet consistently ranked second, with scores of 74.09% in Chinese and 72.66% in English.
  • MiniCPM-V-2.6 exhibited a notable performance gap between Chinese (18.47%) and English (29.04%), indicating challenges in cross-lingual understanding.

The detailed results across the 21 industries are summarized in Tables 4 and 5 of the paper.
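
As a rough illustration of how such per-industry, per-language scores can be aggregated, the Python sketch below tallies accuracy from a hypothetical list of prediction records produced by your own evaluation loop; it is not the evaluation code used in the paper:

from collections import defaultdict

# Hypothetical prediction records collected during evaluation, e.g.
# {"domain": "steel industry", "language": "zh", "pred": "B", "answer": "B"}
predictions = []

totals, correct = defaultdict(int), defaultdict(int)
for rec in predictions:
    key = (rec["domain"], rec["language"])
    totals[key] += 1
    correct[key] += int(rec["pred"] == rec["answer"])

# Per-sector, per-language accuracy, mirroring the per-industry tables in the paper.
for (domain, lang), n in sorted(totals.items()):
    print(f"{domain} [{lang}]: {100.0 * correct[(domain, lang)] / n:.2f}% over {n} samples")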

πŸ™ Acknowledgements

This work would not have been possible without the contributions of the following:

  • Domain Experts: For their invaluable insights and validation efforts.
  • Wuhan AI Research and Institute of Automation, Chinese Academy of Sciences: For their support and collaboration.

📚 Citation

If you find MME-Industry useful for your research, please cite the following paper:

@article{yi2025mmeindustry,
  title={MME-Industry: A Cross-Industry Multimodal Evaluation Benchmark},
  author={Yi, Dongyi and Zhu, Guibo and Ding, Chenglin and Li, Zongshu and Yi, Dong and Wang, Jinqiao},
  journal={arXiv preprint arXiv:2501.16688},
  year={2025}
}

Future Work

We are committed to continuously improving MME-Industry. Future directions include:

  • Expanding the Dataset Scale: To enhance coverage and diversity.
  • Increasing the Number of Tested Models: To ensure comprehensive evaluation.
  • Establishing Open-Source Platforms: To foster community engagement.
  • Implementing Continuous Evaluation Mechanisms: To keep pace with the rapid evolution of industrial AI technologies.

We welcome contributions and feedback from the research community to help us achieve these goals.
