Multi-modal Discriminative Model for Vision-and-Language Navigation

Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie


Abstract
Vision-and-Language Navigation (VLN) is a natural language grounding task in which agents interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals. Successful agents must be able to parse natural language of varying linguistic styles, ground it in potentially unfamiliar scenes, and plan and react amid ambiguous environmental feedback. Generalization is limited by the amount of human-annotated data; in particular, paired vision-language sequence data is expensive to collect. We develop a discriminator that evaluates how well an instruction explains a given path in the VLN task using multi-modal alignment. Our study reveals that only a small fraction of the high-quality augmented data from Fried et al., as scored by our discriminator, is needed to train VLN agents to comparable performance. We also show that a VLN agent warm-started with pre-trained components from the discriminator outperforms the benchmark success rate of 35.5 by 10% in relative terms.
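As a rough illustration of the instruction-path alignment idea the abstract describes, the sketch below encodes the instruction and the visual path with separate sequence encoders and scores their match with a similarity head, trained to separate matched pairs from mismatched negatives. All names, dimensions, and the cosine-similarity scoring are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an instruction-path alignment discriminator.
# Class name, dimensions, and scoring function are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentDiscriminator(nn.Module):
    """Scores how well a natural language instruction explains a navigation path."""

    def __init__(self, vocab_size, vis_dim=2048, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # One LSTM per modality: instruction tokens and per-step visual features.
        self.text_enc = nn.LSTM(hidden, hidden, batch_first=True)
        self.path_enc = nn.LSTM(vis_dim, hidden, batch_first=True)

    def forward(self, instr_tokens, path_feats):
        # instr_tokens: (B, T_text) token ids; path_feats: (B, T_path, vis_dim).
        _, (h_text, _) = self.text_enc(self.embed(instr_tokens))
        _, (h_path, _) = self.path_enc(path_feats)
        # Alignment score from the final hidden states of the two encoders.
        return torch.cosine_similarity(h_text[-1], h_path[-1], dim=-1)

# Training signal: matched (instruction, path) pairs labeled 1, mismatched pairs 0.
disc = AlignmentDiscriminator(vocab_size=1000)
instr = torch.randint(0, 1000, (4, 12))      # batch of 4 tokenized instructions
path = torch.randn(4, 6, 2048)               # 6 visual observations per path
labels = torch.tensor([1.0, 1.0, 0.0, 0.0])  # first two pairs are matched
loss = F.binary_cross_entropy_with_logits(disc(instr, path), labels)
loss.backward()
```

Once trained, such a discriminator could rank augmented (instruction, path) pairs by alignment score, and its encoders could warm-start a navigation agent, as the abstract outlines.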
Anthology ID:
W19-1605
Volume:
Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Venues:
NAACL | RoboNLP | SpLU | WS
Publisher:
Association for Computational Linguistics
Pages:
40–49
URL:
https://www.aclweb.org/anthology/W19-1605
DOI:
10.18653/v1/W19-1605
PDF:
http://aclanthology.lst.uni-saarland.de/W19-1605.pdf