Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles

Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1156-1167, 2022.

Abstract

Learning complex robot behaviors through interaction requires structured exploration. Planning should target interactions with the potential to optimize long-term performance, while only reducing uncertainty where conducive to this objective. This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards. We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling. The policy is then trained on an upper confidence bound (UCB) objective to identify and select the interactions most promising to improve long-term performance. We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives. In sparse and hard to explore environments we achieve an average improvement of over 30%.
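The core selection rule described in the abstract, optimism via an upper confidence bound over an ensemble of return estimates, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the toy ensemble values, and the scalar `beta` weight are illustrative assumptions; in LOVE the ensemble members are learned latent world models with value heads rather than fixed numbers.

```python
import numpy as np

def ucb_objective(value_ensemble, beta=1.0):
    """Optimistic score: ensemble mean plus beta times ensemble std.

    value_ensemble: array of shape (n_models, n_actions), each row one
    ensemble member's predicted long-term return per candidate action.
    """
    mean = value_ensemble.mean(axis=0)  # agreement across members
    std = value_ensemble.std(axis=0)    # epistemic uncertainty proxy
    return mean + beta * std

# Toy example: 3 ensemble members scoring 4 candidate actions.
# Action 1 has a high, certain mean; action 3 has a similar mean but
# high disagreement, so optimism prefers exploring it.
values = np.array([
    [1.0, 2.0, 0.5, 1.0],
    [1.1, 2.1, 0.4, 3.0],
    [0.9, 1.9, 0.6, 2.0],
])
scores = ucb_objective(values, beta=1.0)
best_action = int(np.argmax(scores))  # → 3 (uncertain but promising)
```

With `beta = 0` this reduces to greedy mean-value selection; larger `beta` trades exploitation for deep exploration of uncertain returns, which is the knob the UCB objective exposes.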

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-seyde22b,
  title     = {Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles},
  author    = {Seyde, Tim and Schwarting, Wilko and Karaman, Sertac and Rus, Daniela},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1156--1167},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/seyde22b/seyde22b.pdf},
  url       = {https://proceedings.mlr.press/v164/seyde22b.html},
  abstract  = {Learning complex robot behaviors through interaction requires structured exploration. Planning should target interactions with the potential to optimize long-term performance, while only reducing uncertainty where conducive to this objective. This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards. We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling. The policy is then trained on an upper confidence bound (UCB) objective to identify and select the interactions most promising to improve long-term performance. We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives. In sparse and hard to explore environments we achieve an average improvement of over 30%.}
}
Endnote
%0 Conference Paper
%T Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles
%A Tim Seyde
%A Wilko Schwarting
%A Sertac Karaman
%A Daniela Rus
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-seyde22b
%I PMLR
%P 1156--1167
%U https://proceedings.mlr.press/v164/seyde22b.html
%V 164
%X Learning complex robot behaviors through interaction requires structured exploration. Planning should target interactions with the potential to optimize long-term performance, while only reducing uncertainty where conducive to this objective. This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards. We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling. The policy is then trained on an upper confidence bound (UCB) objective to identify and select the interactions most promising to improve long-term performance. We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives. In sparse and hard to explore environments we achieve an average improvement of over 30%.
APA
Seyde, T., Schwarting, W., Karaman, S. &amp; Rus, D. (2022). Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1156-1167. Available from https://proceedings.mlr.press/v164/seyde22b.html.