CS-Sum: A Benchmark for Code-Switching Dialogue Summarization and the Limits of Large Language Models
Abstract
LLMs exhibit high automated metric scores but make subtle errors in code-switching dialogue summarization, highlighting the need for specialized training on CS data.
Code-switching (CS) poses a significant challenge for Large Language Models (LLMs), yet how well LLMs comprehend it remains underexplored. We introduce CS-Sum to evaluate LLM comprehension of CS through the task of CS-dialogue-to-English summarization. CS-Sum is the first benchmark for CS dialogue summarization across Mandarin-English (EN-ZH), Tamil-English (EN-TA), and Malay-English (EN-MS), with 900-1300 human-annotated dialogues per language pair. Evaluating ten LLMs, including open- and closed-source models, we analyze performance across few-shot, translate-summarize, and fine-tuning (LoRA and QLoRA on synthetic data) approaches. Our findings show that, although scores on automated metrics are high, LLMs make subtle mistakes that alter the overall meaning of the dialogue. To this end, we identify the three most common types of errors that LLMs make when handling CS input. Error rates vary across CS pairs and LLMs, with some LLMs erring more frequently on certain language pairs, underscoring the need for specialized training on code-switched data.
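To make the evaluation setups concrete, here is a minimal sketch of the translate-summarize baseline mentioned above: first translate the code-switched dialogue into English, then summarize the English version. It assumes an OpenAI-compatible chat API; the model name, prompts, and example dialogue are illustrative placeholders, not taken from the paper or the benchmark.

```python
# Sketch of a translate-then-summarize pipeline for code-switched dialogue.
# Assumes an OpenAI-compatible chat API; MODEL and the prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

def chat(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def translate_then_summarize(dialogue: str) -> str:
    """Two-step baseline: CS dialogue -> English dialogue -> English summary."""
    english = chat(
        "Translate the following code-switched dialogue entirely into English, "
        "keeping the speaker labels:\n\n" + dialogue
    )
    return chat(
        "Summarize the following dialogue in two or three English sentences:\n\n"
        + english
    )

if __name__ == "__main__":
    # Hypothetical example dialogue, not drawn from CS-Sum.
    example = (
        "A: Eh, you free tomorrow? We go makan at the new place lah.\n"
        "B: Can, but I got meeting until 6pm, after that okay.\n"
    )
    print(translate_then_summarize(example))
```

The few-shot setup differs only in prompting the model to summarize the code-switched dialogue directly, with a handful of in-context examples instead of the translation step.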
Community
What happens when you talk to ChatGPT or other LLMs using a mix of languages, like English plus Mandarin, Malay, or Tamil, in the same sentence? Our paper CS-Sum, from NTU, is now on arXiv. We introduce the first benchmark for summarizing multilingual dialogues where speakers naturally switch between languages mid-conversation. With over 900 annotated examples per language pair, we evaluate 10 LLMs and show that even top models often misinterpret content, skip key information, or confuse speakers. If you're building AI for real-world, multilingual settings, this might be for you.
This is really great!!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Sample-Efficient Language Model for Hinglish Conversational AI (2025)
- XL-Instruct: Synthetic Data for Cross-Lingual Open-Ended Generation (2025)
- Bridging the Linguistic Divide: A Survey on Leveraging Large Language Models for Machine Translation (2025)
- An Empirical Study of Many-to-Many Summarization with Large Language Models (2025)
- Investigating and Scaling up Code-Switching for Multilingual Language Model Pre-Training (2025)
- Is LLM the Silver Bullet to Low-Resource Languages Machine Translation? (2025)
- IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages (2025)