CUNI System for the WMT17 Multimodal Translation Task

Publication at Faculty of Mathematics and Physics |
2017

Abstract

In this paper, we describe our submissions to the WMT17 Multimodal Translation Task. For Task 1 (multimodal translation), our best scoring system is a purely textual neural translation of the source image caption to the target language.

The main feature of the system is the use of additional data that was acquired by selecting similar sentences from parallel corpora and by data synthesis with back-translation. For Task 2 (cross-lingual image captioning), our best submitted system generates an English caption which is then translated by the best system used in Task 1.
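The back-translation step mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `reverse_translate` stands in for a trained target-to-source NMT model, and the resulting synthetic pairs would be mixed into the parallel training data.

```python
# Back-translation data synthesis (illustrative sketch):
# monolingual target-language sentences are translated back into the
# source language, and each (back-translation, original) pair is
# added to the training set as synthetic parallel data.

def reverse_translate(sentence: str) -> str:
    """Stand-in for a target->source MT model (hypothetical stub)."""
    # A real system would invoke a trained NMT model here.
    return "<back-translated> " + sentence

def synthesize_parallel(mono_target: list[str]) -> list[tuple[str, str]]:
    """Pair each target sentence with its back-translation as (source, target)."""
    return [(reverse_translate(t), t) for t in mono_target]

corpus = synthesize_parallel(["ein Hund im Park", "zwei Katzen"])
```

In practice the quality of the reverse model matters less than coverage: even noisy synthetic source sentences paired with clean target sentences tend to help the forward model.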

We also present negative results, based on ideas that we believe have the potential to improve translation quality but did not prove useful in our particular setup.