Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset

Ashish Thapliyal, Jordi Pont-Tuset, Xi Chen, and Radu Soricut
Google Research


EMNLP, 2022

BibTeX:
@inproceedings{thapliyal2022crossmodal,
  author        = {Ashish Thapliyal and Jordi Pont-Tuset and Xi Chen and Radu Soricut},
  title         = {{Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset}},
  booktitle     = {EMNLP},
  year          = {2022}
}


Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 for short), a geographically diverse set of 3600 images annotated with human-generated reference captions in 36 languages. The images were selected from across the world, covering regions where the 36 languages are spoken, and annotated with captions that are stylistically consistent across all languages while avoiding annotation artifacts due to direct translation. We apply this benchmark to model selection for massively multilingual image captioning models, and show that automatic metrics using XM3600 as golden references correlate strongly with human evaluations.
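To illustrate how per-language reference captions back automatic metrics, the sketch below groups references by image and language and scores a candidate caption with a toy unigram-precision metric. The record layout, field names, and captions here are illustrative assumptions, not the actual XM3600 file format (see the dataset README for that):

```python
from collections import defaultdict

# Hypothetical records: the real XM3600 caption files may use a
# different schema; these values are made up for illustration.
records = [
    {"image_id": "img_001", "locale": "en", "caption": "a dog running on the beach"},
    {"image_id": "img_001", "locale": "en", "caption": "a brown dog plays in the sand"},
    {"image_id": "img_001", "locale": "es", "caption": "un perro corriendo por la playa"},
]

# Group reference captions by (image, language), as an evaluation
# harness would before scoring model outputs.
references = defaultdict(list)
for rec in records:
    references[(rec["image_id"], rec["locale"])].append(rec["caption"].split())

def unigram_precision(candidate, refs):
    """Toy stand-in for a captioning metric: the fraction of candidate
    tokens that appear in any reference for this image/language."""
    tokens = candidate.split()
    vocab = {tok for ref in refs for tok in ref}
    return sum(tok in vocab for tok in tokens) / len(tokens)

score = unigram_precision("a dog jumping on the sand", references[("img_001", "en")])
```

Real evaluations would substitute a standard metric such as CIDEr or BLEU for `unigram_precision`, but the grouping of multilingual references per image is the same.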




- Captions* (17 MB)
- Images (302 MB)
- Image Attributions (1.3 MB)

* The annotations are licensed by Google LLC under the CC BY 4.0 license.

Machine Translations of Other Datasets

The original captions are from Conceptual Captions (CC3M) and COCO Captions. Back translations are provided to allow a rough estimate of the translation quality. See the README for further information about license and format.
- CC3M Train (7 GB)
- CC3M Dev (33 MB)
- COCO Train (567 MB)
- COCO Dev (25 MB)
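One way to use the back translations for the rough quality estimate mentioned above is a round-trip check: compare each original English caption against its back translation and flag large divergences. The token-overlap heuristic and the example caption pair below are illustrative assumptions, not part of the released files:

```python
def round_trip_overlap(original, back_translation):
    """Jaccard overlap between the token sets of an original caption and
    its back translation; a crude proxy for how much meaning survived
    the translation round trip."""
    a = set(original.lower().split())
    b = set(back_translation.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical caption pair (made up for illustration).
orig = "a man riding a bicycle down a city street"
back = "a man rides a bike down a street in the city"
overlap = round_trip_overlap(orig, back)
# A low overlap would suggest the machine translation drifted in meaning;
# synonyms and reorderings (bicycle/bike) keep this heuristic rough.
```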