Crossmodal-3600
A Massively Multilingual Multimodal Evaluation Dataset
Publication
Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset
Ashish Thapliyal, Jordi Pont-Tuset, Xi Chen, and Radu Soricut
EMNLP, 2022
[PDF] [arXiv] [BibTeX]
@inproceedings{ThapliyalCrossmodal2022,
author = {Ashish Thapliyal and Jordi Pont-Tuset and Xi Chen and Radu Soricut},
title = {{Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset}},
booktitle = {EMNLP},
year = {2022}
}
Abstract
Research in massively multilingual image captioning has been severely hampered by a lack of
high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset
(XM3600 in short), a geographically-diverse set of 3600 images annotated with
human-generated reference captions in 36 languages. The images were selected from across the
world, covering regions where the 36 languages are spoken, and annotated with captions that
achieve consistency in terms of style across all languages, while avoiding annotation
artifacts due to direct translation. We apply this benchmark to model selection for
massively multilingual image captioning models, and show strong correlation results with
human evaluations when using XM3600 as golden references for automatic metrics.
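The abstract's "golden references for automatic metrics" use case amounts to scoring model captions against the human reference captions with an off-the-shelf metric. The sketch below is a minimal illustration of that idea using sacrebleu's corpus-level BLEU; the input layout and the choice of BLEU (rather than, e.g., CIDEr) are assumptions for illustration, not the paper's exact evaluation setup.
```python
# Minimal sketch: scoring captions for one language against XM3600-style
# references with an off-the-shelf automatic metric (sacrebleu BLEU).
# The per-image data layout below is hypothetical; consult the dataset
# release for the actual file format.
import sacrebleu

# Hypothetical inputs: one model caption per image, and a list of human
# reference captions per image, all in the same target language.
hypotheses = ["a man rides a bicycle down a busy street"]
references = [["a man cycling on a crowded road",
               "a cyclist riding through a busy street"]]

# sacrebleu expects references as one stream per reference position, so
# transpose the per-image reference lists (padding is omitted for brevity).
ref_streams = list(map(list, zip(*references)))

bleu = sacrebleu.corpus_bleu(hypotheses, ref_streams)
print(f"BLEU = {bleu.score:.2f}")
```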
Explore
Open Annotation Visualizer
Downloads
Machine Translations of Other Datasets
The original captions are from Conceptual Captions (CC3M) and COCO Captions. The back translations are provided to allow for a rough estimation of the translation quality. See the README for further information about license and format.
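One simple way to use the back translations for such a rough quality estimate is to compare each original English caption with its round-trip back translation using a sentence-level metric. The sketch below assumes a JSON Lines file with placeholder field names ("original", "backtranslation"); the README documents the real format.
```python
# Rough translation-quality check (a sketch, not the official procedure):
# compare original English captions with their back translations using
# sentence-level BLEU and average the scores.
import json
import sacrebleu

def rough_quality(path):
    scores = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)           # assumes one JSON object per line
            original = record["original"]        # hypothetical field name
            back = record["backtranslation"]     # hypothetical field name
            scores.append(sacrebleu.sentence_bleu(back, [original]).score)
    return sum(scores) / len(scores) if scores else 0.0

# Example (hypothetical file name):
# print(rough_quality("cc3m_de_backtranslations.jsonl"))
```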