Auxiliary Tasks Speed Up Learning Point Goal Navigation

Joel Ye, Dhruv Batra, Erik Wijmans, Abhishek Das
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:498-516, 2021.

Abstract

PointGoal Navigation is an embodied task that requires agents to navigate to a specified point in an unseen environment. Wijmans et al. showed that this task is solvable in simulation, but their method is computationally prohibitive, requiring 2.5 billion frames of experience and 180 GPU-days. We develop a method to significantly improve sample efficiency in learning PointNav using self-supervised auxiliary tasks (e.g. predicting the action taken between two egocentric observations, predicting the distance between two observations from a trajectory, etc.). We find that naively combining multiple auxiliary tasks improves sample efficiency, but only provides marginal gains beyond a point. To overcome this, we use attention to combine representations from individual auxiliary tasks. Our best agent is 5.5x faster to match the performance of the previous state-of-the-art, DD-PPO, at 40M frames, and improves on DD-PPO’s performance at 40M frames by 0.16 SPL. Our code is publicly available at github.com/joel99/habitat-pointnav-aux.
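
The key architectural idea in the abstract, combining the representations learned under individual auxiliary tasks with attention rather than naively, can be sketched compactly. The snippet below is an illustrative PyTorch sketch only, not the authors' implementation (see the linked repository for that); the module name AttentiveBeliefFusion, the choice of the current observation embedding as the attention query, and all dimensions are assumptions.

    # Illustrative sketch (not the paper's code): attention over per-task
    # belief representations, producing a single fused input for the policy.
    import torch
    import torch.nn as nn

    class AttentiveBeliefFusion(nn.Module):
        """Fuse one belief vector per auxiliary task into one policy input."""

        def __init__(self, hidden_size: int):
            super().__init__()
            self.query = nn.Linear(hidden_size, hidden_size)  # query from the observation embedding (assumption)
            self.key = nn.Linear(hidden_size, hidden_size)
            self.scale = hidden_size ** -0.5

        def forward(self, obs_embedding: torch.Tensor, beliefs: torch.Tensor) -> torch.Tensor:
            # obs_embedding: (batch, hidden)            current observation embedding
            # beliefs:       (batch, num_tasks, hidden) one recurrent belief per auxiliary task
            q = self.query(obs_embedding).unsqueeze(1)                      # (batch, 1, hidden)
            k = self.key(beliefs)                                           # (batch, num_tasks, hidden)
            weights = torch.softmax((q * k).sum(-1) * self.scale, dim=-1)   # (batch, num_tasks)
            return (weights.unsqueeze(-1) * beliefs).sum(dim=1)             # (batch, hidden)

    fusion = AttentiveBeliefFusion(hidden_size=512)
    obs = torch.randn(8, 512)             # batch of observation embeddings
    beliefs = torch.randn(8, 3, 512)      # e.g. 3 auxiliary tasks
    fused = fusion(obs, beliefs)          # (8, 512) fused representation for the policy

In the full agent, each belief would come from its own recurrent encoder trained with its own self-supervised loss (e.g. predicting the action taken between two egocentric observations); the attention weights let the policy emphasize whichever task representation is most useful at each step.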

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-ye21a,
  title     = {Auxiliary Tasks Speed Up Learning Point Goal Navigation},
  author    = {Ye, Joel and Batra, Dhruv and Wijmans, Erik and Das, Abhishek},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {498--516},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/ye21a/ye21a.pdf},
  url       = {https://proceedings.mlr.press/v155/ye21a.html}
}
APA
Ye, J., Batra, D., Wijmans, E., & Das, A. (2021). Auxiliary Tasks Speed Up Learning Point Goal Navigation. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:498-516. Available from https://proceedings.mlr.press/v155/ye21a.html.
