Learning Neural Sequence-to-Sequence Models from Weak Feedback with Bipolar Ramp Loss

Laura Jehl, Carolin Lawrence, Stefan Riezler


Abstract
In many machine learning scenarios, supervision by gold labels is not available and consequently neural models cannot be trained directly by maximum likelihood estimation. In a weak supervision scenario, metric-augmented objectives can be employed to assign feedback to model outputs, which can be used to extract a supervision signal for training. We present several objectives for two separate weakly supervised tasks, machine translation and semantic parsing. We show that objectives should actively discourage negative outputs in addition to promoting a surrogate gold structure. This notion of bipolarity is naturally present in ramp loss objectives, which we adapt to neural models. We show that bipolar ramp loss objectives outperform non-bipolar ramp loss objectives and minimum risk training on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which is able to outperform even the best sequence-level ramp loss on both weakly supervised tasks.
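
As a rough illustration of the core idea (using standard ramp-loss notation, not necessarily the paper's exact formulation): given an input x, a model distribution p_θ, and a metric-based feedback score δ(y) for an output y, a bipolar sequence-level ramp loss can be sketched in LaTeX as

\mathcal{L}_{\mathrm{ramp}}(\theta) = -\log p_\theta(y^{+} \mid x) + \log p_\theta(y^{-} \mid x),
\quad y^{+} = \operatorname*{arg\,max}_{y}\,\big[\log p_\theta(y \mid x) + \delta(y)\big],
\quad y^{-} = \operatorname*{arg\,max}_{y}\,\big[\log p_\theta(y \mid x) - \delta(y)\big].

The loss is bipolar in that it simultaneously promotes the "hope" output y⁺ (high model score and high feedback), which serves as a surrogate gold structure, and actively discourages the "fear" output y⁻ (high model score but low feedback). In practice the two arg max searches would typically be approximated over an n-best list or beam.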
Anthology ID: Q19-1015
Volume: Transactions of the Association for Computational Linguistics, Volume 7
Month: March
Year: 2019
Venue: TACL
Pages: 233–248
URL: https://www.aclweb.org/anthology/Q19-1015
DOI: 10.1162/tacl_a_00265
PDF: http://aclanthology.lst.uni-saarland.de/Q19-1015.pdf
Video: https://vimeo.com/383999550