A critical survey on measuring success in rank-based keyword assignment to documents

Natalie Schluter


Abstract
Evaluation approaches for unsupervised rank-based keyword assignment are nearly as numerous as the systems themselves. The prolific introduction of each new metric (or metric twist) seems to stem from general dissatisfaction with the previous one, yet the source of that dissatisfaction has not previously been discussed in the literature. The difficulty may lie in a poor specification of the keyword assignment task with respect to the rank-based approach. Starting from a more complete specification of this task, we aim to show why the previous evaluation metrics fail to satisfy researchers’ goals of distinguishing and detecting good rank-based keyword assignment systems. We put forward a characterisation of an ideal evaluation metric and discuss the consistency of the existing metrics with this ideal, finding that the average standard normalised cumulative gain metric is the most consistent with it.
Anthology ID:
2015.jeptalnrecital-court.9
Volume:
Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles courts
Month:
June
Year:
2015
Address:
Caen, France
Venue:
JEP/TALN/RECITAL
Publisher:
ATALA
Pages:
55–60
URL:
https://www.aclweb.org/anthology/2015.jeptalnrecital-court.9
PDF:
http://aclanthology.lst.uni-saarland.de/2015.jeptalnrecital-court.9.pdf