CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models

Abhinav Singh, Patrick Xia, Guanghui Qin, Mahsa Yarmohammadi, Benjamin Van Durme


Abstract
Copy mechanisms are employed in sequence-to-sequence (seq2seq) models to reproduce words from the input in the output. These mechanisms, operating at the lexical type level, fail to provide an explicit alignment recording where each token was copied from. Further, they require that contiguous token sequences from the input (spans) be copied token by token. We present a model with an explicit token-level copy operation and extend it to copying entire spans. Our model provides hard alignments between spans in the input and output, allowing for nontraditional applications of seq2seq, like information extraction. We demonstrate the approach on Nested Named Entity Recognition, achieving near state-of-the-art accuracy with an order-of-magnitude increase in decoding speed.
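The abstract's token-level copy operation with hard alignment can be illustrated with a minimal sketch. This is a hypothetical pointer-style formulation, not the paper's exact model: it assumes dot-product copy scores over encoder states, a scalar generation gate `p_gen`, and illustrative names and shapes throughout.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_step(dec_state, enc_states, gen_logits, p_gen):
    """One decoding step mixing generation and explicit copying (illustrative).

    dec_state:  (d,)   decoder hidden state
    enc_states: (n, d) encoder states for the n source tokens
    gen_logits: (V,)   logits over the output vocabulary
    p_gen:      probability of generating rather than copying

    Returns a distribution over V vocabulary items + n source positions,
    plus the argmax source position, i.e. a hard alignment for the copy.
    """
    copy_scores = enc_states @ dec_state      # (n,) attention-style scores
    copy_dist = softmax(copy_scores)          # distribution over source positions
    gen_dist = softmax(gen_logits)            # distribution over the vocabulary
    mixed = np.concatenate([p_gen * gen_dist, (1 - p_gen) * copy_dist])
    hard_alignment = int(copy_dist.argmax())  # explicit source index recorded
    return mixed, hard_alignment
```

Under this sketch, copying an entire span could be modeled by following a copy of source position i with a dedicated "copy next" action that deterministically copies position i+1, so a span is one decision per step rather than a fresh soft attention each time; the hard source indices recovered along the way give the span alignment the abstract describes.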
Anthology ID:
2020.spnlp-1.2
Volume:
Proceedings of the Fourth Workshop on Structured Prediction for NLP
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | spnlp
Publisher:
Association for Computational Linguistics
Pages:
11–16
URL:
https://www.aclweb.org/anthology/2020.spnlp-1.2
DOI:
10.18653/v1/2020.spnlp-1.2
PDF:
http://aclanthology.lst.uni-saarland.de/2020.spnlp-1.2.pdf
Optional supplementary material:
 2020.spnlp-1.2.OptionalSupplementaryMaterial.pdf