Vision-and-Dialog Navigation

Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer
Proceedings of the Conference on Robot Learning, PMLR 100:394-406, 2020.

Abstract

Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/
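
To make the Navigation from Dialog History setup concrete, the sketch below shows one way an NDH instance and its dialog-history input might be represented. This is a minimal, hypothetical illustration only: the dataclass fields, special tokens, and the flatten_dialog helper are assumptions for exposition, not the authors' released data schema or code.

```python
# Illustrative sketch of a Navigation from Dialog History (NDH) example and
# how a dialog history could be flattened into a single text sequence for a
# sequence-to-sequence navigator. All names here are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class DialogTurn:
    """One exchange between the Navigator and the Oracle."""
    navigator_question: str
    oracle_answer: str


@dataclass
class NDHInstance:
    """Hypothetical NDH training example: a target object, the dialog so far,
    and the navigation actions the agent should learn to produce."""
    target_object: str            # e.g. "plant"
    dialog_history: List[DialogTurn]
    gold_actions: List[str]       # e.g. discrete moves or viewpoint ids


def flatten_dialog(instance: NDHInstance) -> str:
    """Concatenate the target hint and all prior turns, oldest first, so a
    seq2seq encoder can condition on as much dialog history as desired."""
    parts = [f"<TAR> {instance.target_object}"]
    for turn in instance.dialog_history:
        parts.append(f"<NAV> {turn.navigator_question}")
        parts.append(f"<ORA> {turn.oracle_answer}")
    return " ".join(parts)


if __name__ == "__main__":
    example = NDHInstance(
        target_object="plant",
        dialog_history=[
            DialogTurn("Should I head up the stairs?",
                       "Yes, then turn left at the landing."),
        ],
        gold_actions=["forward", "forward", "left", "forward"],
    )
    print(flatten_dialog(example))
```

Under this framing, "looking farther back in the dialog history" simply means including more (or all) of the earlier turns in the flattened input rather than only the most recent question-answer pair.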

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-thomason20a,
  title     = {Vision-and-Dialog Navigation},
  author    = {Thomason, Jesse and Murray, Michael and Cakmak, Maya and Zettlemoyer, Luke},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {394--406},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/thomason20a/thomason20a.pdf},
  url       = {https://proceedings.mlr.press/v100/thomason20a.html},
  abstract  = {Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/}
}
Endnote
%0 Conference Paper
%T Vision-and-Dialog Navigation
%A Jesse Thomason
%A Michael Murray
%A Maya Cakmak
%A Luke Zettlemoyer
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-thomason20a
%I PMLR
%P 394--406
%U https://proceedings.mlr.press/v100/thomason20a.html
%V 100
%X Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/
APA
Thomason, J., Murray, M., Cakmak, M. &amp; Zettlemoyer, L. (2020). Vision-and-Dialog Navigation. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:394-406. Available from https://proceedings.mlr.press/v100/thomason20a.html.
