Neural Speech Translation using Lattice Transformations and Graph Networks

Daniel Beck, Trevor Cohn, Gholamreza Haffari


Abstract
Speech translation systems usually follow a pipeline approach, using word lattices as an intermediate representation. However, previous work assumes access to the original transcriptions used to train the ASR system, which can limit applicability in real scenarios. In this work, we propose an approach to speech translation through lattice transformations and neural models based on graph networks. Experimental results show that our approach reaches competitive performance without relying on transcriptions, while also being orders of magnitude faster than previous work.
Anthology ID:
D19-5304
Volume:
Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)
Month:
November
Year:
2019
Address:
Hong Kong
Venues:
EMNLP | TextGraphs | WS
Publisher:
Association for Computational Linguistics
Pages:
26–31
URL:
https://www.aclweb.org/anthology/D19-5304
DOI:
10.18653/v1/D19-5304
PDF:
http://aclanthology.lst.uni-saarland.de/D19-5304.pdf
Attachment:
D19-5304.Attachment.pdf