Character-Word LSTM Language Models
Lyan Verwimp, Joris Pelemans, Hugo Van hamme, Patrick Wambacq
Abstract
We present a Character-Word Long Short-Term Memory Language Model that reduces both the perplexity relative to a baseline word-level language model and the number of parameters of the model. Character information can reveal structural (dis)similarities between words and can even be used when a word is out-of-vocabulary, thus improving the modeling of infrequent and unknown words. By concatenating word and character embeddings, we achieve up to 2.77% relative improvement in perplexity on English compared to a baseline model with a similar number of parameters, and 4.57% on Dutch. Moreover, we outperform baseline word-level models that have a larger number of parameters.
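The input scheme described in the abstract is simple to reproduce: at each time step, the word embedding is concatenated with the embeddings of a fixed number of the word's characters (padded or truncated) before entering the LSTM. Below is a minimal PyTorch sketch of that idea; the class name, dimensions, and character count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CharWordLSTM(nn.Module):
    """LSTM language model whose input at each step is the word
    embedding concatenated with the embeddings of the word's characters."""

    def __init__(self, word_vocab, char_vocab, word_dim=150,
                 char_dim=25, n_chars=8, hidden=512):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # Input size grows by n_chars * char_dim, so word_dim can be
        # kept small; this is plausibly where the parameter savings
        # mentioned in the abstract come from.
        self.lstm = nn.LSTM(word_dim + n_chars * char_dim, hidden,
                            batch_first=True)
        self.out = nn.Linear(hidden, word_vocab)

    def forward(self, words, chars):
        # words: (batch, seq); chars: (batch, seq, n_chars)
        w = self.word_emb(words)                    # (B, T, word_dim)
        c = self.char_emb(chars)                    # (B, T, n_chars, char_dim)
        c = c.flatten(start_dim=2)                  # (B, T, n_chars * char_dim)
        h, _ = self.lstm(torch.cat([w, c], dim=-1))
        return self.out(h)                          # next-word logits

# Toy usage: 2 sentences of 5 words, 8 characters per word (hypothetical sizes).
model = CharWordLSTM(word_vocab=10000, char_vocab=50)
words = torch.randint(0, 10000, (2, 5))
chars = torch.randint(0, 50, (2, 5, 8))
logits = model(words, chars)                        # shape (2, 5, 10000)
```

Because the character embeddings capture subword structure that is shared across words, they remain informative even for rare or unseen words, which matches the abstract's claim about infrequent and unknown words.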
- Anthology ID: E17-1040
- Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
- Month: April
- Year: 2017
- Address: Valencia, Spain
- Venue: EACL
- Publisher: Association for Computational Linguistics
- Pages: 417–427
- URL: https://www.aclweb.org/anthology/E17-1040
- PDF: http://aclanthology.lst.uni-saarland.de/E17-1040.pdf