Learning Behaviors through Physics-driven Latent Imagination

Antoine Richard, Stéphanie Aravecchia, Matthieu Geist, Cédric Pradalier
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1190-1199, 2022.

Abstract

Model-based reinforcement learning (MBRL) consists of learning a so-called world model, a representation of the environment acquired through interactions with it, and then using it to train an agent. This approach is particularly interesting in the context of field robotics, as it alleviates the need to train online, and reduces the risks inherent to directly training agents on real robots. Generally, in such approaches, the world model encompasses both the part related to the robot itself and the rest of the environment. We argue that decoupling the environment representation (for example, images or laser scans) from the dynamics of the physical system (that is, the robot and its physical state) can increase the flexibility of world models and open doors to greater robustness. In this paper, we apply this concept to a strong latent agent, Dreamer. We then showcase the increased flexibility by transferring the environment part of the world model from one robot (a boat) to another (a rover), simply by adapting the physical model in the imagination. We additionally demonstrate the robustness of our method through real-world experiments on a boat.

Cite this Paper

BibTeX
@InProceedings{pmlr-v164-richard22a,
  title     = {Learning Behaviors through Physics-driven Latent Imagination},
  author    = {Richard, Antoine and Aravecchia, St\'ephanie and Geist, Matthieu and Pradalier, C\'edric},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1190--1199},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/richard22a/richard22a.pdf},
  url       = {https://proceedings.mlr.press/v164/richard22a.html},
  abstract  = {Model-based reinforcement learning (MBRL) consists in learning a so-called world model, a representation of the environment through interactions with it, then use it to train an agent. This approach is particularly interesting in the context of field robotics, as it alleviates the need to train online, and reduces the risks inherent to directly training agents on real robots. Generally, in such approaches, the world encompasses both the part related to the robot itself and the rest of the environment. We argue that decoupling the environment representation (for example, images or laser scans) from the dynamics of the physical system (that is, the robot and its physical state) can increase the flexibility of world models and open doors to greater robustness. In this paper, we apply this concept to a strong latent-agent, Dreamer. We then showcase the increased flexibility by transferring the environment part of the world model from one robot (a boat) to another (a rover), simply by adapting the physical model in the imagination. We additionally demonstrate the robustness of our method through real-world experiments on a boat.}
}
Endnote
%0 Conference Paper
%T Learning Behaviors through Physics-driven Latent Imagination
%A Antoine Richard
%A Stéphanie Aravecchia
%A Matthieu Geist
%A Cédric Pradalier
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-richard22a
%I PMLR
%P 1190--1199
%U https://proceedings.mlr.press/v164/richard22a.html
%V 164
%X Model-based reinforcement learning (MBRL) consists in learning a so-called world model, a representation of the environment through interactions with it, then use it to train an agent. This approach is particularly interesting in the context of field robotics, as it alleviates the need to train online, and reduces the risks inherent to directly training agents on real robots. Generally, in such approaches, the world encompasses both the part related to the robot itself and the rest of the environment. We argue that decoupling the environment representation (for example, images or laser scans) from the dynamics of the physical system (that is, the robot and its physical state) can increase the flexibility of world models and open doors to greater robustness. In this paper, we apply this concept to a strong latent-agent, Dreamer. We then showcase the increased flexibility by transferring the environment part of the world model from one robot (a boat) to another (a rover), simply by adapting the physical model in the imagination. We additionally demonstrate the robustness of our method through real-world experiments on a boat.
APA
Richard, A., Aravecchia, S., Geist, M. & Pradalier, C. (2022). Learning Behaviors through Physics-driven Latent Imagination. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1190-1199. Available from https://proceedings.mlr.press/v164/richard22a.html.
