Retouchdown: Releasing Touchdown on StreetLearn as a Public Resource for Language Grounding Tasks in Street View

Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, Piotr Mirowski


Abstract
The Touchdown dataset (Chen et al., 2019) provides human-written instructions for navigating New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These panoramas have been added to the StreetLearn dataset and can be obtained via the same process previously used for StreetLearn. We also provide a reference implementation for both Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those reported by Chen et al. (2019) and show that the panoramas we have added to StreetLearn support both Touchdown tasks and can be used effectively for further research and comparison.
Anthology ID:
2020.splu-1.7
Volume:
Proceedings of the Third International Workshop on Spatial Language Understanding
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | SpLU
Publisher:
Association for Computational Linguistics
Pages:
56–62
URL:
https://www.aclweb.org/anthology/2020.splu-1.7
DOI:
10.18653/v1/2020.splu-1.7
PDF:
http://aclanthology.lst.uni-saarland.de/2020.splu-1.7.pdf