Subtitles to Segmentation: Improving Low-Resource Speech-to-Text Translation Pipelines

David Wan, Zhengping Jiang, Chris Kedzie, Elsbeth Turcan, Peter Bell, Kathy McKeown


Abstract
In this work, we focus on improving ASR output segmentation in the context of low-resource speech-to-text translation. ASR output segmentation is crucial because ASR systems segment the input audio using purely acoustic information and are not guaranteed to output sentence-like segments. Since most MT systems expect sentences as input, feeding in longer, unsegmented passages can lead to sub-optimal performance. We explore the feasibility of using datasets of subtitles from TV shows and movies to train better ASR segmentation models. We further incorporate part-of-speech (POS) tag and dependency label information (derived from the unsegmented ASR outputs) into our segmentation model, and show that this noisy syntactic information can improve model accuracy. We evaluate our models intrinsically on segmentation quality and extrinsically on downstream MT performance, as well as on downstream tasks including cross-lingual information retrieval (CLIR) and human relevance assessments. Our model shows improved performance on downstream tasks for Lithuanian and Bulgarian.
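The approach sketched in the abstract can be framed as per-token boundary classification over the unsegmented ASR transcript, with surrounding words and (noisy) POS tags as features. The following is a minimal illustrative sketch of such a feature extractor; the function name, window size, and the hard-coded POS tags are hypothetical stand-ins, not taken from the paper.

```python
# Hedged sketch: segmentation as per-token binary classification
# ("does a sentence-like segment end after token i?"), using words
# and POS tags in a small context window as features. The POS tags
# below are illustrative placeholders for tags a tagger would
# produce on the unsegmented ASR output.

def boundary_features(tokens, pos_tags, i, window=2):
    """Build a feature dict for position i from words/POS in a +/-window."""
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        if 0 <= j < len(tokens):
            feats[f"word[{off}]"] = tokens[j].lower()
            feats[f"pos[{off}]"] = pos_tags[j]
    return feats

# Toy unsegmented ASR output (no punctuation or casing), with
# hypothetical POS tags:
tokens = ["thanks", "for", "watching", "see", "you", "next", "week"]
pos = ["NOUN", "ADP", "VERB", "VERB", "PRON", "ADJ", "NOUN"]
print(boundary_features(tokens, pos, 2))
```

A boundary classifier (e.g. logistic regression or a neural tagger) would consume such features per position and predict segment boundaries, here ideally after "watching".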
Anthology ID:
2020.clssts-1.11
Volume:
Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020)
Month:
May
Year:
2020
Address:
Marseille, France
Venues:
CLSSTS | LREC | WS
Publisher:
European Language Resources Association
Pages:
68–73
Language:
English
URL:
https://www.aclweb.org/anthology/2020.clssts-1.11
PDF:
http://aclanthology.lst.uni-saarland.de/2020.clssts-1.11.pdf