Sim-to-Real Transfer with Neural-Augmented Robot Simulation

Florian Golemo, Adrien Ali Taiga, Aaron Courville, Pierre-Yves Oudeyer
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:817-828, 2018.

Abstract

Despite the recent successes of deep reinforcement learning, teaching complex motor skills to a physical robot remains a hard problem. While learning directly on a real system is usually impractical, doing so in simulation has proven to be fast and safe. Nevertheless, because of the "reality gap," policies trained in simulation often perform poorly when deployed on a real system. In this work, we introduce a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator. This Neural-Augmented Simulation (NAS) can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators. We demonstrate the potential of our approach through a set of experiments on the Mujoco simulator with added backlash and the Poppy Ergo Jr robot. NAS allows us to learn policies that are competitive with ones that would have been learned directly on the real robot.
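To make the idea in the abstract concrete, the sketch below shows one plausible way to implement the core of NAS: a recurrent network is trained to predict the difference between the simulator's next state and the real robot's next state for the same action sequence, and the augmented simulator then adds that predicted correction to every simulated step before it is shown to the policy. This is an illustrative reconstruction under stated assumptions (PyTorch, an LSTM difference model, placeholder dimensions and data handling), not the authors' released code.

import torch
import torch.nn as nn

# Placeholder dimensions; the real values depend on the robot and simulator.
STATE_DIM, ACTION_DIM, HIDDEN = 12, 6, 128


class DifferenceModel(nn.Module):
    """LSTM mapping (sim state, action, sim next state) to the correction
    needed to match the real robot's next state."""

    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(2 * STATE_DIM + ACTION_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, STATE_DIM)

    def forward(self, sim_state, action, sim_next, hidden=None):
        x = torch.cat([sim_state, action, sim_next], dim=-1)
        out, hidden = self.lstm(x, hidden)
        return self.head(out), hidden


def train_difference_model(model, trajectories, epochs=50, lr=1e-3):
    """Fit the model on paired sim/real rollouts collected with identical
    action sequences. Each item holds tensors of shape (batch, seq, dim):
    (sim_states, actions, sim_next_states, real_next_states)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for sim_s, act, sim_next, real_next in trajectories:
            pred_delta, _ = model(sim_s, act, sim_next)
            loss = nn.functional.mse_loss(pred_delta, real_next - sim_next)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


class NeuralAugmentedSim:
    """Wraps a simulator step function and corrects its output with the
    learned difference model, so a policy can be trained against it."""

    def __init__(self, sim_step, model):
        self.sim_step = sim_step  # callable: (state, action) -> next sim state
        self.model = model.eval()
        self.hidden = None        # recurrent state, reset at episode start

    def reset(self):
        self.hidden = None

    def step(self, state, action):
        sim_next = self.sim_step(state, action)
        with torch.no_grad():
            delta, self.hidden = self.model(
                state.view(1, 1, -1), action.view(1, 1, -1),
                sim_next.view(1, 1, -1), self.hidden)
        return sim_next + delta.view(-1)

A policy-learning loop would then call NeuralAugmentedSim.step in place of the raw simulator step, which is what allows policies trained this way to transfer better to the real robot.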

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-golemo18a,
  title     = {Sim-to-Real Transfer with Neural-Augmented Robot Simulation},
  author    = {Golemo, Florian and Taiga, Adrien Ali and Courville, Aaron and Oudeyer, Pierre-Yves},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {817--828},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/golemo18a/golemo18a.pdf},
  url       = {https://proceedings.mlr.press/v87/golemo18a.html},
  abstract  = {Despite the recent successes of deep reinforcement learning, teaching complex motor skills to a physical robot remains a hard problem. While learning directly on a real system is usually impractical, doing so in simulation has proven to be fast and safe. Nevertheless, because of the "reality gap," policies trained in simulation often perform poorly when deployed on a real system. In this work, we introduce a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator. This Neural-Augmented Simulation (NAS) can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators. We demonstrate the potential of our approach through a set of experiments on the Mujoco simulator with added backlash and the Poppy Ergo Jr robot. NAS allows us to learn policies that are competitive with ones that would have been learned directly on the real robot.}
}
Endnote
%0 Conference Paper
%T Sim-to-Real Transfer with Neural-Augmented Robot Simulation
%A Florian Golemo
%A Adrien Ali Taiga
%A Aaron Courville
%A Pierre-Yves Oudeyer
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-golemo18a
%I PMLR
%P 817--828
%U https://proceedings.mlr.press/v87/golemo18a.html
%V 87
%X Despite the recent successes of deep reinforcement learning, teaching complex motor skills to a physical robot remains a hard problem. While learning directly on a real system is usually impractical, doing so in simulation has proven to be fast and safe. Nevertheless, because of the "reality gap," policies trained in simulation often perform poorly when deployed on a real system. In this work, we introduce a method for training a recurrent neural network on the differences between simulated and real robot trajectories and then using this model to augment the simulator. This Neural-Augmented Simulation (NAS) can be used to learn control policies that transfer significantly better to real environments than policies learned on existing simulators. We demonstrate the potential of our approach through a set of experiments on the Mujoco simulator with added backlash and the Poppy Ergo Jr robot. NAS allows us to learn policies that are competitive with ones that would have been learned directly on the real robot.
APA
Golemo, F., Taiga, A. A., Courville, A., & Oudeyer, P.-Y. (2018). Sim-to-Real Transfer with Neural-Augmented Robot Simulation. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:817-828. Available from https://proceedings.mlr.press/v87/golemo18a.html.
