Seq2Seq Models with Dropout can Learn Generalizable Reduplication
Brandon Prickett, Aaron Traylor, Joe Pater
Abstract
Natural language reduplication can pose a challenge to neural models of language and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016) and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018), but they do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
- Anthology ID: W18-5810
- Volume: Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology
- Month: October
- Year: 2018
- Address: Brussels, Belgium
- Venues: EMNLP | WS
- SIG: SIGMORPHON
- Publisher: Association for Computational Linguistics
- Pages: 93–100
- URL: https://www.aclweb.org/anthology/W18-5810
- DOI: 10.18653/v1/W18-5810
- PDF: http://aclanthology.lst.uni-saarland.de/W18-5810.pdf