Analysing concatenation approaches to document-level NMT in two different domains

Yves Scherrer, Jörg Tiedemann, Sharid Loáiciga


Abstract
In this paper, we investigate how different aspects of discourse context affect the performance of recent neural MT systems. We describe two popular datasets covering news and movie subtitles and we provide a thorough analysis of the distribution of various document-level features in their domains. Furthermore, we train a set of context-aware MT models on both datasets and propose a comparative evaluation scheme that contrasts coherent context with artificially scrambled documents and absent context, arguing that the impact of discourse-aware MT models will become visible in this way. Our results show that the models are indeed affected by the manipulation of the test data, providing a different view on document-level translation quality than absolute sentence-level scores.
Anthology ID: D19-6506
Volume: Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
Month: November
Year: 2019
Address: Hong Kong, China
Venues: DiscoMT | EMNLP | WS
Publisher: Association for Computational Linguistics
Pages: 51–61
URL: https://www.aclweb.org/anthology/D19-6506
DOI: 10.18653/v1/D19-6506
PDF: http://aclanthology.lst.uni-saarland.de/D19-6506.pdf