Investigating the Stability of Concrete Nouns in Word Embeddings

Bénédicte Pierrejean, Ludovic Tanguy


Abstract
We know that word embeddings trained using neural-based methods (such as word2vec SGNS) are sensitive to stability problems: across two models trained using the exact same set of parameters, the nearest neighbors of a word are likely to change. Not all words are equally impacted by this internal instability, and recent studies have investigated which features influence the stability of word embeddings. Stability can be seen as a clue to the reliability of a word's semantic representation. In this work, we investigate the influence of the degree of concreteness of nouns on the stability of their semantic representation. We show that for generic English corpora, abstract words are more affected by stability problems than concrete words. We also find that the difference between the degree of concreteness of a noun and that of its nearest neighbors can partly explain the stability or instability of its neighbors.
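The notion of stability used above, that a word's nearest neighbors may differ between two models trained with identical parameters, can be sketched as the overlap of top-k neighbor sets. This is a minimal illustration, not the authors' implementation; the function names, the toy random embeddings, and the choice of k are assumptions.

```python
# Hedged sketch: quantify a word's stability across two embedding models
# as the overlap between its top-k nearest-neighbor sets.
import numpy as np

def top_k_neighbors(vectors: np.ndarray, word_idx: int, k: int) -> set:
    """Indices of the k nearest neighbors of word_idx by cosine similarity."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed[word_idx]
    sims[word_idx] = -np.inf  # exclude the word itself
    return set(np.argsort(sims)[-k:])

def neighbor_overlap(model_a: np.ndarray, model_b: np.ndarray,
                     word_idx: int, k: int = 25) -> float:
    """Fraction of shared top-k neighbors (0 = fully unstable, 1 = fully stable)."""
    a = top_k_neighbors(model_a, word_idx, k)
    b = top_k_neighbors(model_b, word_idx, k)
    return len(a & b) / k

# Toy demo: random "embeddings" for a 100-word vocabulary, second model
# lightly perturbed to mimic two runs of the same training setup.
rng = np.random.default_rng(0)
model_a = rng.normal(size=(100, 50))
model_b = model_a + rng.normal(scale=0.01, size=(100, 50))
print(neighbor_overlap(model_a, model_b, word_idx=0, k=10))
```

Averaging this overlap over a vocabulary (or over several model pairs) gives a per-word stability score that can then be correlated with properties such as concreteness ratings.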
Anthology ID:
W19-0510
Volume:
Proceedings of the 13th International Conference on Computational Semantics - Short Papers
Month:
May
Year:
2019
Address:
Gothenburg, Sweden
Venues:
IWCS | WS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
65–70
URL:
https://www.aclweb.org/anthology/W19-0510
DOI:
10.18653/v1/W19-0510
PDF:
http://aclanthology.lst.uni-saarland.de/W19-0510.pdf