A Reinforced Generation of Adversarial Examples for Neural Machine Translation

Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, Jiajun Chen


Abstract
Despite their significant efficacy, neural machine translation systems tend to fail on inputs of lower quality, which can seriously harm their credibility; understanding how and when such systems fail is therefore critical for industrial maintenance. Instead of collecting and analyzing bad cases with a limited set of handcrafted error features, we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm can expose pitfalls with respect to a given performance metric, e.g., BLEU, and can target any given neural machine translation architecture. We conduct adversarial attack experiments on two mainstream neural machine translation architectures, RNN-search and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis of the attack's preference patterns, demonstrating its capability to expose pitfalls.
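To make the paradigm concrete, below is a minimal sketch of the kind of reinforced attack loop the abstract describes: a policy samples which source token to perturb, the perturbed input is fed to the victim system, and the reward is the resulting BLEU degradation, gated by a meaning-preservation check on the source side. Everything here (the toy translate stand-in, the unigram proxy for BLEU, the substitution table SUBS, the one-token-edit similarity check) is an illustrative assumption, not the paper's actual implementation.

import math
import random
from typing import Dict, List

def translate(src: str) -> str:
    # Toy stand-in for the victim NMT system (e.g., RNN-search or Transformer).
    return " ".join(reversed(src.split()))

def bleu(hyp: str, ref: str) -> float:
    # Unigram-overlap proxy standing in for BLEU; use a real metric in practice.
    h, r = hyp.split(), ref.split()
    return 100.0 * sum(1 for t in h if t in r) / max(len(h), 1)

def preserves_meaning(src: str, adv: str) -> bool:
    # Hypothetical source-side constraint: allow at most one token edit.
    return sum(a != b for a, b in zip(src.split(), adv.split())) <= 1

# Hypothetical table of near-synonym substitutions the agent may apply.
SUBS: Dict[str, List[str]] = {"good": ["nice", "fine"], "movie": ["film"]}

def attack(src: str, ref: str, steps: int = 200, lr: float = 0.1) -> str:
    """REINFORCE over edit positions; reward = BLEU drop on the victim output."""
    tokens = src.split()
    logits = [0.0] * len(tokens)          # policy over which position to edit
    base = bleu(translate(src), ref)      # victim's score on the clean input
    best, best_reward = src, 0.0
    for _ in range(steps):
        probs = [math.exp(x) for x in logits]
        total = sum(probs)
        probs = [p / total for p in probs]
        i = random.choices(range(len(tokens)), probs)[0]  # sample a position
        candidates = SUBS.get(tokens[i], [])
        if candidates:
            adv_tokens = list(tokens)
            adv_tokens[i] = random.choice(candidates)
            adv = " ".join(adv_tokens)
            # Reward the BLEU degradation only if the source meaning survives.
            r = base - bleu(translate(adv), ref) if preserves_meaning(src, adv) else -1.0
            if r > best_reward:
                best, best_reward = adv, r
        else:
            r = -1.0                      # discourage positions with no edits
        # Policy-gradient step: d log softmax(i) / d logit_j = [j == i] - probs[j]
        for j in range(len(tokens)):
            logits[j] += lr * r * ((1.0 if j == i else 0.0) - probs[j])
    return best

# Illustrative usage with the toy stand-ins above:
# attack("the movie is good", "good is movie the")

In a faithful implementation, translate would be the victim RNN-search or Transformer model, bleu a real metric such as sacreBLEU, and the policy a learned network shared across sentences rather than per-sentence logits.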
Anthology ID: 2020.acl-main.319
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 3486–3497
URL: https://www.aclweb.org/anthology/2020.acl-main.319
DOI: 10.18653/v1/2020.acl-main.319
PDF: http://aclanthology.lst.uni-saarland.de/2020.acl-main.319.pdf
Software: 2020.acl-main.319.Software.zip
Dataset: 2020.acl-main.319.Dataset.pdf
Video: http://slideslive.com/38929000