---
license: gpl-3.0
language:
  - en
tags:
  - emotion-cause-analysis
---

# Emotion-Cause-in-Friends (ECF)

For the task of Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE), we constructed the multimodal conversational emotion-cause dataset ECF (1.0). For SemEval-2024 Task 3, we further annotated an extended test set as the evaluation data. Note: ECF (1.0) and the extended test set for the SemEval evaluation together constitute ECF 2.0.

For more details, please refer to our GitHub repository.

## Dataset Statistics

| Item | Train | Dev | Test | Total | Evaluation Data for SemEval-2024 Task 3 |
|---|---|---|---|---|---|
| Conversations | 1,001 | 112 | 261 | 1,374 | 341 |
| Utterances | 9,966 | 1,087 | 2,566 | 13,619 | 3,101 |
| Emotion (utterances) | 5,577 | 668 | 1,445 | 7,690 | 1,821 |
| Emotion-cause (utterance) pairs | 7,055 | 866 | 1,873 | 9,794 | 2,462 |

## Supported Tasks

  • Multimodal Emotion Recognition in Conversation (ERC)
  • Causal/Cause Span Extraction (CSE)
  • Emotion Cause Extraction (ECE) / Causal Emotion Entailment (CEE)
  • Multimodal Emotion-Cause Pair Extraction in Conversation (MECPE)
  • ...

## About Multimodal Data

⚠️ Due to potential copyright issues with the TV show "Friends", we cannot provide pre-segmented video clips.

If you need to use the multimodal data, you may consider the following options:

  1. Use the acoustic and visual features we provide:

    • `audio_embedding_6373.npy`: the embedding table composed of the 6,373-dimensional acoustic features of each utterance, extracted with openSMILE
    • `video_embedding_4096.npy`: the embedding table composed of the 4,096-dimensional visual features of each utterance, extracted with a 3D-CNN
    • Please note that the above features cover only the original ECF (1.0) dataset; the SemEval evaluation data is not included. If needed, you can contact us, and we will do our best to release new features.
  2. Download the raw video clips from MELD. Since ECF (1.0) was constructed on the basis of the MELD dataset, most utterances in ECF (1.0) correspond to those in MELD; the correspondence can be found in the last column of the file `all_data_pair_ECFvsMELD.txt`. However, we made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances. Therefore, some timestamps provided in ECF (1.0) have been corrected and may differ from those in MELD, and there are also new utterances that cannot be found in MELD. Given this, we recommend option (3) if feasible.

  3. Download the raw videos of Friends from the website, and use the FFmpeg toolkit to extract audio-visual clips of each utterance based on the timestamps we provide.
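For option (1), the released feature files are ordinary NumPy tables that can be loaded with `np.load`; for option (3), FFmpeg can cut a clip by timestamp. The sketch below is illustrative only: the dummy table size, the utterance-index convention, the video filename, and the timestamps are assumptions, not part of the dataset release.

```python
import numpy as np

# --- Option 1: look up precomputed utterance features ------------------
# For illustration we save and reload a tiny dummy table; in real use you
# would load the released file directly, e.g.
#   audio_table = np.load("audio_embedding_6373.npy")
# (One row per utterance; the exact index-to-utterance mapping is an
# assumption here.)
np.save("audio_embedding_6373_dummy.npy",
        np.random.rand(5, 6373).astype(np.float32))
audio_table = np.load("audio_embedding_6373_dummy.npy")

utt_index = 2                       # hypothetical utterance index
utt_audio = audio_table[utt_index]  # 6373-dim openSMILE feature vector
assert utt_audio.shape == (6373,)

# --- Option 3: cut an utterance clip with FFmpeg -----------------------
# Builds (but does not run) an ffmpeg command; the file names and
# timestamps below are placeholders.
def ffmpeg_clip_cmd(video_path, start, end, out_path):
    """Return an ffmpeg command that stream-copies [start, end] to out_path."""
    return ["ffmpeg", "-i", video_path, "-ss", start, "-to", end,
            "-c", "copy", out_path]

cmd = ffmpeg_clip_cmd("friends_s01e01.mp4",
                      "00:01:02.500", "00:01:05.000",
                      "dia1_utt2.mp4")
# subprocess.run(cmd, check=True) would execute it (import subprocess first).
```

Stream copy (`-c copy`) is fast but cuts on keyframes; re-encoding gives frame-accurate boundaries at the cost of speed.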

## Citation

If you find ECF useful for your research, please cite our paper using the following BibTeX entries:

```bibtex
@article{wang2023multimodal,
  author={Wang, Fanfan and Ding, Zixiang and Xia, Rui and Li, Zhaoyu and Yu, Jianfei},
  journal={IEEE Transactions on Affective Computing},
  title={Multimodal Emotion-Cause Pair Extraction in Conversations},
  year={2023},
  volume={14},
  number={3},
  pages={1832--1844},
  doi={10.1109/TAFFC.2022.3226559}
}

@inproceedings{wang2024SemEval,
  author={Wang, Fanfan and Ma, Heqing and Xia, Rui and Yu, Jianfei and Cambria, Erik},
  title={SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations},
  booktitle={Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)},
  month={June},
  year={2024},
  address={Mexico City, Mexico},
  publisher={Association for Computational Linguistics},
  pages={2022--2033},
  url={https://aclanthology.org/2024.semeval2024-1.273}
}
```