Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models

Simon Dobnik, Mehdi Ghanimifard, John Kelleher


Abstract
The challenge for computational models of spatial descriptions for situated dialogue systems is the integration of information from different modalities. The semantics of spatial descriptions are grounded in at least two sources of information: (i) a geometric representation of space and (ii) the functional interaction of the related objects. We train several neural language models on descriptions of scenes from a dataset of image captions and examine whether the functional or geometric bias of spatial descriptions reported in the literature is reflected in the estimated perplexity of these models. The results of these experiments have implications for the creation of models of spatial lexical semantics for human-robot dialogue systems. Furthermore, they also provide insight into the kinds of semantic knowledge captured by neural language models trained on spatial descriptions, which has implications for image captioning systems.
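The abstract's evaluation measure, perplexity, is the exponentiated average negative log-probability a language model assigns to a held-out description. As a minimal illustration (not the paper's actual models, which are neural), the sketch below estimates perplexity with a toy add-one-smoothed unigram model over invented spatial descriptions; all data and names here are hypothetical:

```python
import math
from collections import Counter

def perplexity(log_probs):
    # Perplexity = exp of the mean negative log-probability per token.
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy "training corpus" of spatial descriptions (invented for illustration).
train = "the cup is on the table".split() + "the ball is under the chair".split()
counts = Counter(train)
vocab_size = len(set(train))
total = len(train)

def unigram_logprob(token):
    # Add-one (Laplace) smoothing over the training vocabulary.
    return math.log((counts[token] + 1) / (total + vocab_size))

# Score a held-out description: lower perplexity = less "surprising" to the model.
test = "the cup is under the table".split()
pp = perplexity([unigram_logprob(t) for t in test])
```

The paper's experiments apply the same idea with neural language models: if a model's perplexity drops more on functionally biased uses of a preposition than on geometrically biased ones (or vice versa), that asymmetry reflects what the model has learned about the relation.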
Anthology ID:
W18-1401
Volume:
Proceedings of the First International Workshop on Spatial Language Understanding
Month:
June
Year:
2018
Address:
New Orleans
Venues:
NAACL | SpLU | WS
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://www.aclweb.org/anthology/W18-1401
DOI:
10.18653/v1/W18-1401
PDF:
http://aclanthology.lst.uni-saarland.de/W18-1401.pdf