Rapid Exploration for Open-World Navigation with Latent Goal Models

Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine
Proceedings of the 5th Conference on Robot Learning, PMLR 164:674-684, 2022.

Abstract

We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments. At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images. We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration. Trained on a large offline dataset of prior experience, the model acquires a representation of visual goals that is robust to task-irrelevant distractors. We demonstrate our method on a mobile ground robot in open-world exploration scenarios. Given an image of a goal that is up to 80 meters away, our method leverages its representation to explore and discover the goal in under 20 minutes, even amidst previously-unseen obstacles and weather conditions.
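The abstract describes the method's components only at a high level; the sketch below is a hedged illustration of how a latent goal model with an information bottleneck could be wired up. It is not the authors' released implementation: the module name LatentGoalModel, the layer sizes, the 64x64 image inputs, and the beta weight are assumptions chosen for brevity. The encoder compresses the (observation, goal) image pair into a Gaussian latent, the distance and action heads are conditioned on the observation and that latent, and the KL term against a unit Gaussian prior acts as the bottleneck; sampling latents from that prior is one way to propose candidate goals during exploration.

# Illustrative sketch (assumed architecture, not the paper's released code):
# a variational information-bottleneck goal encoder with distance/action heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentGoalModel(nn.Module):
    """Encodes (current image, goal image) into a compact latent z, then
    predicts the distance to the goal and the action toward it from
    (current image, z). The KL term to a unit Gaussian prior is the
    information bottleneck; sampling z ~ N(0, I) proposes candidate goals."""

    def __init__(self, latent_dim=32, action_dim=2):
        super().__init__()
        # Simple CNN trunk shared by both image inputs (assumes 3x64x64 images).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 6 * 6  # flattened feature size for 64x64 inputs
        # Encoder q(z | obs, goal) -> Gaussian mean and log-std.
        self.enc = nn.Linear(2 * feat_dim, 2 * latent_dim)
        # Heads conditioned on the current observation and the latent goal.
        self.dist_head = nn.Linear(feat_dim + latent_dim, 1)
        self.act_head = nn.Linear(feat_dim + latent_dim, action_dim)

    def forward(self, obs, goal):
        f_obs, f_goal = self.cnn(obs), self.cnn(goal)
        mu, log_std = self.enc(torch.cat([f_obs, f_goal], -1)).chunk(2, -1)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterized sample
        h = torch.cat([f_obs, z], -1)
        return self.dist_head(h), self.act_head(h), mu, log_std


def loss_fn(model, obs, goal, dist_label, act_label, beta=1e-2):
    """Regression to labeled distances/actions plus the bottleneck penalty."""
    dist_pred, act_pred, mu, log_std = model(obs, goal)
    recon = F.mse_loss(dist_pred.squeeze(-1), dist_label) + F.mse_loss(act_pred, act_label)
    # KL( q(z | obs, goal) || N(0, I) ), the information-bottleneck regularizer.
    kl = 0.5 * (mu.pow(2) + (2 * log_std).exp() - 2 * log_std - 1).sum(-1).mean()
    return recon + beta * kl

In a sketch like this, exploration would draw z ~ N(0, I), condition the heads on the current observation, and execute the predicted action toward the sampled latent goal, while the non-parametric topological memory described in the abstract would be maintained separately from this network.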

Cite this Paper

BibTeX
@InProceedings{pmlr-v164-shah22a,
  title     = {Rapid Exploration for Open-World Navigation with Latent Goal Models},
  author    = {Shah, Dhruv and Eysenbach, Benjamin and Rhinehart, Nicholas and Levine, Sergey},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {674--684},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/shah22a/shah22a.pdf},
  url       = {https://proceedings.mlr.press/v164/shah22a.html}
}
APA
Shah, D., Eysenbach, B., Rhinehart, N. & Levine, S. (2022). Rapid Exploration for Open-World Navigation with Latent Goal Models. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:674-684. Available from https://proceedings.mlr.press/v164/shah22a.html.
