Evaluating Natural Language Understanding Services for Conversational Question Answering Systems

Daniel Braun, Adrian Hernandez Mendez, Florian Matthes, Manfred Langen


Abstract
Conversational interfaces have recently gained a lot of attention. One reason for the current hype is that chatbots (one particularly popular form of conversational interface) can nowadays be created without any programming knowledge, thanks to various toolkits and so-called Natural Language Understanding (NLU) services. While these NLU services are already widely used in both industry and science, they have so far not been analysed systematically. In this paper, we present a method to evaluate the classification performance of NLU services. Moreover, we present two new corpora: one consisting of annotated questions and one consisting of annotated questions with the corresponding answers. Based on these corpora, we conduct an evaluation of some of the most popular NLU services. We thereby aim to enable both researchers and companies to make more educated decisions about which service to use.
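The evaluation the abstract describes amounts to scoring an NLU service's intent predictions against gold annotations from a labelled question corpus. A minimal sketch of such scoring (macro-averaged F1 over intent labels; the helper name and the example labels are illustrative, not the paper's actual code or data) might look like:

```python
# Illustrative sketch: score an NLU service's intent predictions
# against gold annotations using macro-averaged F1.
# The function name and example labels are hypothetical.
from collections import Counter

def macro_f1(gold, predicted):
    """Macro-averaged F1 over all intent labels seen in gold or predictions."""
    labels = set(gold) | set(predicted)
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1          # correct intent
        else:
            fp[p] += 1          # predicted label is a false positive
            fn[g] += 1          # gold label is a false negative
    f1_scores = []
    for label in labels:
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical gold annotations and service predictions:
gold = ["FindConnection", "FindConnection", "DepartureTime", "DepartureTime"]
pred = ["FindConnection", "DepartureTime", "DepartureTime", "DepartureTime"]
print(round(macro_f1(gold, pred), 3))  # → 0.733
```

Macro-averaging weights every intent equally regardless of how often it occurs, which makes it a reasonable choice when comparing services on corpora with skewed intent distributions.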
Anthology ID: W17-5522
Volume: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Month: August
Year: 2017
Address: Saarbrücken, Germany
Venues: SIGDIAL | WS
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 174–185
URL: https://www.aclweb.org/anthology/W17-5522
DOI: 10.18653/v1/W17-5522
PDF: http://aclanthology.lst.uni-saarland.de/W17-5522.pdf