Synthesizing Parallel Data of User-Generated Texts with Zero-Shot Neural Machine Translation

Benjamin Marie, Atsushi Fujita


Abstract
Neural machine translation (NMT) systems are usually trained on clean parallel data. They can perform very well when translating clean in-domain texts. However, as demonstrated by previous work, translation quality worsens significantly on noisy texts, such as user-generated texts (UGT) from online social media. Given the lack of parallel UGT data that could be used to train or adapt NMT systems, we synthesize parallel UGT data, exploiting monolingual UGT data through cross-lingual language model pre-training and zero-shot NMT systems. This paper presents two different but complementary approaches: one alters given clean parallel data into UGT-like parallel data, whereas the other generates translations directly from monolingual UGT data. On the MTNT translation tasks, we show that our synthesized parallel data lead to better NMT systems for UGT while making them more robust in translating texts from various domains and styles.
Anthology ID: 2020.tacl-1.46
Volume: Transactions of the Association for Computational Linguistics, Volume 8
Year: 2020
Venue: TACL
Pages: 710–725
URL: https://www.aclweb.org/anthology/2020.tacl-1.46
DOI: 10.1162/tacl_a_00341
PDF: http://aclanthology.lst.uni-saarland.de/2020.tacl-1.46.pdf