End-to-End Speech Translation with Adversarial Training

Xuancai Li, Kehai Chen, Tiejun Zhao, Muyun Yang


Abstract
End-to-end speech translation typically leverages audio-to-text parallel data to train a speech translation model, and has shown impressive results on various speech translation tasks. Because collecting audio-to-text parallel data is labor-intensive, speech translation is a naturally low-resource translation scenario, which greatly hinders its improvement. In this paper, we propose a new adversarial training method that leverages target-side monolingual data to alleviate the low-resource problem of speech translation. In our method, the existing speech translation model serves as a Generator that produces target-language output, and a neural Discriminator is trained to distinguish the outputs of the speech translation model from true target monolingual sentences. Experimental results on the speech translation task of the CCMT 2019-BSTC dataset demonstrate that the proposed method significantly improves the performance of the end-to-end speech translation system.
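The abstract describes a standard generator/discriminator setup: the speech translation model plays the generator, and a discriminator separates its outputs from real target monolingual sentences. The following is a minimal PyTorch sketch of that alternating training loop, not the authors' actual implementation: the module names, the mean-pooled MLP discriminator, and the assumption that `st_model(audio)` returns soft token embeddings (so gradients can flow back into the generator) are all illustrative choices.

```python
# Hypothetical sketch of adversarial training for end-to-end speech
# translation. Names, dimensions, and architecture are assumptions.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores a sequence of token embeddings as real (target monolingual
    sentence) or fake (speech-translation output). A mean-pooled MLP
    keeps the sketch short; a CNN or RNN encoder would also work."""
    def __init__(self, emb_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, emb_dim) -> (batch,) logits
        pooled = token_embs.mean(dim=1)
        return self.net(pooled).squeeze(-1)

def adversarial_step(st_model, discriminator, audio, target_mono_embs,
                     d_opt, g_opt):
    """One alternating update: first train D to separate real monolingual
    sentences from ST outputs, then train the ST model (the generator)
    to fool D. `st_model(audio)` is assumed to return differentiable
    token embeddings of shape (batch, seq_len, emb_dim)."""
    bce = nn.BCEWithLogitsLoss()
    fake_embs = st_model(audio)

    # --- Discriminator update (detach so G is not updated here) ---
    d_opt.zero_grad()
    real_logits = discriminator(target_mono_embs)
    fake_logits = discriminator(fake_embs.detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    d_opt.step()

    # --- Generator update: push D's score on fake outputs toward "real" ---
    g_opt.zero_grad()
    gen_logits = discriminator(fake_embs)
    g_loss = bce(gen_logits, torch.ones_like(gen_logits))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice the generator loss would presumably be combined with the usual cross-entropy loss on the audio-to-text parallel data, so the adversarial signal from target monolingual data supplements rather than replaces supervised training.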
Anthology ID:
2020.autosimtrans-1.2
Volume:
Proceedings of the First Workshop on Automatic Simultaneous Translation
Month:
July
Year:
2020
Address:
Seattle, Washington
Venues:
ACL | AutoSimTrans | WS
Publisher:
Association for Computational Linguistics
Pages:
10–14
URL:
https://www.aclweb.org/anthology/2020.autosimtrans-1.2
DOI:
10.18653/v1/2020.autosimtrans-1.2
PDF:
http://aclanthology.lst.uni-saarland.de/2020.autosimtrans-1.2.pdf
Video:
http://slideslive.com/38929918