Neural Lexicons for Slot Tagging in Spoken Language Understanding

Kyle Williams


Abstract
We explore the use of lexicons or gazetteers in neural models for slot tagging in spoken language understanding. We develop models that encode lexicon information as neural features for use in a long short-term memory (LSTM) neural network. Experiments are performed on data from 4 domains of an intelligent assistant under conditions that often occur in an industry setting, where there may be: 1) large amounts of training data, 2) limited amounts of training data for new domains, and 3) cross-domain training. Results show that the use of neural lexicon information leads to a significant improvement in slot tagging, with improvements in F-score of up to 12%. Our findings have implications for how lexicons can be used to improve the performance of neural slot tagging models.
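Below is a minimal sketch of the general idea described in the abstract: lexicon (gazetteer) matches encoded as per-token features and concatenated with word embeddings before a bidirectional LSTM slot tagger. The class name, feature scheme (one binary match indicator per lexicon), and layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a lexicon-augmented LSTM slot tagger.
# Feature scheme and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class LexiconSlotTagger(nn.Module):
    def __init__(self, vocab_size, num_lexicons, num_slots,
                 word_dim=100, hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Word embedding is concatenated with one binary feature per lexicon
        # (1 if the token matches an entry in that lexicon, 0 otherwise).
        self.lstm = nn.LSTM(word_dim + num_lexicons, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, token_ids, lexicon_feats):
        # token_ids:      (batch, seq_len) word indices
        # lexicon_feats:  (batch, seq_len, num_lexicons) 0/1 match indicators
        x = torch.cat([self.word_emb(token_ids), lexicon_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # per-token slot-tag scores

# Usage example: 2 utterances of 5 tokens each, 3 lexicons, 10 slot tags.
model = LexiconSlotTagger(vocab_size=1000, num_lexicons=3, num_slots=10)
tokens = torch.randint(0, 1000, (2, 5))
lex = torch.zeros(2, 5, 3)      # set entries to 1 where a lexicon matches
scores = model(tokens, lex)     # shape: (2, 5, 10)
```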
Anthology ID: N19-2011
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 83–89
URL: https://www.aclweb.org/anthology/N19-2011
DOI: 10.18653/v1/N19-2011
PDF: http://aclanthology.lst.uni-saarland.de/N19-2011.pdf