---
license: mit
language:
  - en
task_categories:
  - question-answering
tags:
  - Multimodal
  - Theory_of_Mind
size_categories:
  - n<1K
---

# MMToM-QA: Multimodal Theory of Mind Question Answering

🏆 Outstanding Paper Award at ACL 2024

[🏠Homepage] [💻Code](https://github.com/chuanyangjin/MMToM-QA) [📝Paper](https://arxiv.org/abs/2401.08743)

MMToM-QA is the first multimodal benchmark for evaluating machine Theory of Mind (ToM), the ability to understand people's minds. It systematically evaluates ToM on multimodal data as well as on each unimodal counterpart. MMToM-QA consists of 600 questions across seven types, covering belief inference and goal inference in rich and diverse situations. Each of the three belief-inference types has 100 questions (300 belief questions in total), and each of the four goal-inference types has 75 questions (300 goal questions in total).

Currently, only the text-only version of MMToM-QA is available on Hugging Face. For the multimodal or video-only versions, please visit the GitHub repository: https://github.com/chuanyangjin/MMToM-QA
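
For a quick start, the text-only questions hosted here can be loaded with the 🤗 `datasets` library. Below is a minimal sketch, assuming the dataset id `chuanyangjin/MMToM-QA` (inferred from this card's namespace, not confirmed by the card itself; substitute the actual id if it differs):

```python
from datasets import load_dataset

# Assumed dataset id; not confirmed by this card. Replace with the
# actual Hugging Face repository id if it differs.
ds = load_dataset("chuanyangjin/MMToM-QA")

# Show the available splits, then peek at the first record of the
# first split to see the question/answer fields.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```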

## Leaderboard

Please contact us if you would like your results added to the MMToM-QA leaderboard.

## Citation

Please cite the paper if you find it interesting or useful, thanks!

```bibtex
@article{jin2024mmtom,
  title={MMToM-QA: Multimodal Theory of Mind Question Answering},
  author={Jin, Chuanyang and Wu, Yutong and Cao, Jing and Xiang, Jiannan and Kuo, Yen-Ling and Hu, Zhiting and Ullman, Tomer and Torralba, Antonio and Tenenbaum, Joshua B and Shu, Tianmin},
  journal={arXiv preprint arXiv:2401.08743},
  year={2024}
}
```