Self-Training for Unsupervised Parsing with PRPN

Anhad Mohananey, Katharina Kann, Samuel R. Bowman


Abstract
Neural unsupervised parsing (UP) models learn to parse without access to syntactic annotations, while being optimized for another task like language modeling. In this work, we propose self-training for neural UP models: we leverage aggregated annotations predicted by copies of our model as supervision for future copies. To be able to use our model’s predictions during training, we extend a recent neural UP architecture, the PRPN (Shen et al., 2018a), such that it can be trained in a semi-supervised fashion. We then add examples with parses predicted by our model to our unlabeled UP training data. Our self-trained model outperforms the PRPN by 8.1% F1 and the previous state of the art by 1.6% F1. In addition, we show that our architecture can also be helpful for semi-supervised parsing in ultra-low-resource settings.
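For readers who want a concrete picture of the self-training loop the abstract describes, a rough Python sketch follows. All names (train_prpn, predict_parse, the majority-vote aggregation, the hyperparameters) are illustrative assumptions, not the authors' implementation; the paper's exact aggregation and training schedule may differ.

    from collections import Counter

    def aggregate(parses):
        # Majority vote over the bracketings predicted by several model copies;
        # parses are assumed hashable (e.g., tuples of spans). This is one simple
        # way to "aggregate annotations" -- the paper's scheme may differ.
        return Counter(parses).most_common(1)[0][0]

    def self_train(unlabeled_sents, num_copies=4, num_rounds=2):
        labeled = []  # sentences paired with model-predicted parses
        for _ in range(num_rounds):
            # Train several copies of the semi-supervised PRPN: a language-modeling
            # loss on unlabeled data plus a parsing loss on `labeled` (empty in round 1).
            models = [train_prpn(unlabeled_sents, labeled) for _ in range(num_copies)]
            # Use the copies' aggregated predictions as supervision for the next round.
            labeled = [
                (sent, aggregate(tuple(predict_parse(m, sent)) for m in models))
                for sent in unlabeled_sents
            ]
        return train_prpn(unlabeled_sents, labeled)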
Anthology ID:
2020.iwpt-1.11
Volume:
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | IWPT | WS
SIG:
SIGPARSE
Publisher:
Association for Computational Linguistics
Pages:
105–110
URL:
https://www.aclweb.org/anthology/2020.iwpt-1.11
DOI:
10.18653/v1/2020.iwpt-1.11
PDF:
http://aclanthology.lst.uni-saarland.de/2020.iwpt-1.11.pdf
Video:
http://slideslive.com/38929678