One Representation per Word - Does it make Sense for Composition?

Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir


Abstract
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary, or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel word-sense discrimination task. We find that single-sense vector models perform as well as, or better than, multi-sense vector models, despite their arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition can recover sense-specific information from a single-sense vector model remarkably well.
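The abstract's central observation, that pointwise addition can recover sense-specific information from single-sense vectors, is straightforward to illustrate. The following is a minimal sketch, assuming toy hand-set vectors (the words, dimensions, and values are invented for illustration and do not come from the paper's models): adding a context word's vector to an ambiguous word's vector pulls the composed representation toward the contextually appropriate sense.

```python
import numpy as np

def compose(*vectors):
    # Pointwise addition: the simple composition function discussed in the abstract.
    return np.sum(vectors, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy single-sense vectors; the values are illustrative, not from a trained model.
# "bank" conflates its financial and river senses in a single vector.
vecs = {
    "bank":  np.array([0.9, 0.8, 0.2, 0.7]),
    "money": np.array([1.0, 0.9, 0.0, 0.1]),
    "river": np.array([0.1, 0.0, 1.0, 0.9]),
}

bank_money = compose(vecs["bank"], vecs["money"])  # "bank" in a financial context
bank_river = compose(vecs["bank"], vecs["river"])  # "bank" in a river context

# Composition pulls the ambiguous vector toward the sense cued by its neighbour.
print(cosine(bank_money, vecs["money"]))  # ~0.97: financial cue dominates
print(cosine(bank_river, vecs["river"]))  # ~0.86: river cue dominates
print(cosine(bank_money, bank_river))     # ~0.75: the two composed phrases diverge
```

The paper's actual experiments evaluate this behaviour with pre-trained single-vector and multi-sense models on phrase similarity and word-sense discrimination tasks; the toy numbers above only demonstrate the mechanism.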
Anthology ID: W17-1910
Volume: Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications
Month: April
Year: 2017
Address: Valencia, Spain
Venues: SENSE | WS
Publisher: Association for Computational Linguistics
Pages: 79–90
URL: https://www.aclweb.org/anthology/W17-1910
DOI: 10.18653/v1/W17-1910
PDF: http://aclanthology.lst.uni-saarland.de/W17-1910.pdf