arXiv:2502.12404

WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects

Published on Feb 18, 2025
Abstract

As large language models (LLMs) become increasingly capable in languages other than English, it is important to collect benchmark datasets to evaluate their multilingual performance, including on tasks like machine translation (MT). In this work, we extend the WMT24 dataset to cover 55 languages by collecting new human-written references and post-edits for 46 new languages and dialects, in addition to post-edits of the references in 8 of the 9 languages in the original WMT24 dataset. The dataset covers four domains: literary, news, social, and speech. We benchmark a variety of MT providers and LLMs on the collected dataset using automatic metrics and find that LLMs are the best-performing MT systems in all 55 languages. These results should be confirmed using a human-based evaluation, which we leave for future work.
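As a rough illustration of the kind of automatic scoring described in the abstract, the sketch below scores hypothetical system outputs against human-written references using chrF from the sacrebleu library. The metric choice and the example sentences are assumptions for illustration, not the paper's exact evaluation setup or data.

```python
# Minimal sketch: score MT outputs against human references with an
# automatic metric (chrF via sacrebleu). Illustrative only; the paper's
# actual benchmark may use different metrics and data splits.
from sacrebleu.metrics import CHRF

# Hypothetical system outputs for one language pair.
hypotheses = [
    "The committee approved the budget on Tuesday.",
    "She published her first novel last spring.",
]
# Hypothetical human-written references for the same source segments.
references = [
    "The committee passed the budget on Tuesday.",
    "She released her first novel last spring.",
]

chrf = CHRF()
# sacrebleu expects a list of reference streams (one list per reference set).
score = chrf.corpus_score(hypotheses, [references])
print(score)  # e.g. "chrF2 = 72.43"
```

Corpus-level scores like this, computed per language, are what allow systems from many MT providers and LLMs to be ranked across all 55 languages without human raters.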


Models citing this paper: 47

Datasets citing this paper: 2

Spaces citing this paper: 45
