A Study in Improving BLEU Reference Coverage with Diverse Automatic Paraphrasing

Rachel Bawden, Biao Zhang, Lisa Yankovskaya, Andre Tättar, Matt Post


Abstract
We investigate a long-perceived shortcoming in the typical use of BLEU: its reliance on a single reference. Using modern neural paraphrasing techniques, we study whether automatically generating additional *diverse* references can provide better coverage of the space of valid translations and thereby improve BLEU's correlation with human judgments. Our experiments on the into-English language directions of the WMT19 metrics task (at both the system and sentence level) show that using paraphrased references does generally improve BLEU, and when it does, the more diverse the better. However, we also show that better results could be achieved if those paraphrases were to specifically target the parts of the space most relevant to the MT outputs being evaluated. Moreover, the gains remain slight even when human paraphrases are used, suggesting inherent limitations to BLEU's capacity to correctly exploit multiple references. Surprisingly, we also find that adequacy appears to be less important, as shown by the strong results of a sampling-based approach, which even beats human paraphrases when used with sentence-level BLEU.
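To make the mechanism concrete: multi-reference BLEU clips each hypothesis n-gram count by its maximum count across all references, so adding a paraphrased reference can only widen the set of n-grams that count as matches. The sketch below is an illustrative, self-contained implementation of this standard clipping scheme (Papineni et al., 2002) with simple add-one smoothing; it is not the paper's evaluation pipeline, and the smoothing and tie-breaking choices are assumptions for the example.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Counter of all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def multi_ref_bleu(hyp, refs, max_n=4):
    """Sentence-level BLEU of `hyp` against a list of reference strings.

    Clipped counts take, per n-gram, the maximum count over all
    references; the brevity penalty uses the reference length closest
    to the hypothesis length (shorter length wins ties -- an assumption,
    conventions differ). Add-one smoothing keeps zero-match orders from
    zeroing out the whole score.
    """
    hyp_toks = hyp.split()
    ref_toks = [r.split() for r in refs]

    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp_toks, n)
        # Per-n-gram maximum over references: the "union" of valid wordings.
        max_ref = Counter()
        for rt in ref_toks:
            for ng, c in ngrams(rt, n).items():
                max_ref[ng] = max(max_ref[ng], c)
        clipped = sum(min(c, max_ref[ng]) for ng, c in hyp_counts.items())
        total = max(1, sum(hyp_counts.values()))
        log_prec += math.log((clipped + 1) / (total + 1))

    closest = min((len(rt) for rt in ref_toks),
                  key=lambda length: (abs(length - len(hyp_toks)), length))
    bp = min(1.0, math.exp(1 - closest / max(1, len(hyp_toks))))
    return bp * math.exp(log_prec / max_n)
```

Because the per-n-gram maximum is monotone in the reference set, adding a reference never lowers this score; the paper's question is whether the references that paraphrasing adds cover the parts of the translation space that actual MT outputs occupy.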
Anthology ID:
2020.findings-emnlp.82
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | Findings
Publisher:
Association for Computational Linguistics
Pages:
918–932
URL:
https://www.aclweb.org/anthology/2020.findings-emnlp.82
DOI:
10.18653/v1/2020.findings-emnlp.82
PDF:
http://aclanthology.lst.uni-saarland.de/2020.findings-emnlp.82.pdf
Optional supplementary material:
 2020.findings-emnlp.82.OptionalSupplementaryMaterial.txt