SemEval-2019 Task 10: Math Question Answering

Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Gabriel Stanovsky, Hannaneh Hajishirzi, Rik Koncel-Kedziorski


Abstract
We report on the SemEval-2019 task on math question answering. We provided a question set derived from Math SAT practice exams, including 2778 training questions and 1082 test questions. For a significant subset of these questions, we also provided SMT-LIB logical form annotations and an interpreter that could solve these logical forms. Systems were evaluated based on the percentage of correctly answered questions. The top system correctly answered 45% of the test questions, a considerable improvement over the 17% random guessing baseline.
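For readers unfamiliar with SMT-LIB logical forms, the sketch below shows what solving one for a simple SAT-style algebra question might look like. This is a hypothetical illustration using the Z3 Python bindings (the z3-solver package); the task's actual annotation schema and interpreter are described in the paper itself, not here.

    # Hypothetical sketch: a logical form for a simple SAT-style question
    # ("If x + 3 = 7, what is the value of x?") solved with an SMT solver.
    # Z3 stands in for the task's interpreter, which is not specified here.
    from z3 import Int, Solver, sat

    x = Int("x")

    solver = Solver()
    solver.add(x + 3 == 7)  # logical form of the question

    if solver.check() == sat:
        print(solver.model()[x])  # prints: 4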
Anthology ID: S19-2153
Volume: Proceedings of the 13th International Workshop on Semantic Evaluation
Month: June
Year: 2019
Address: Minneapolis, Minnesota, USA
Venue: *SEMEVAL
SIG: SIGLEX
Publisher: Association for Computational Linguistics
Pages: 893–899
URL: https://www.aclweb.org/anthology/S19-2153
DOI: 10.18653/v1/S19-2153
PDF: http://aclanthology.lst.uni-saarland.de/S19-2153.pdf