Curiosity Driven Exploration of Learned Disentangled Goal Spaces

Adrien Laversanne-Finot, Alexandre Pere, Pierre-Yves Oudeyer
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:487-504, 2018.

Abstract

Intrinsically motivated goal exploration processes enable agents to efficiently explore complex environments with high-dimensional continuous actions. They have been applied successfully to real-world robots to discover repertoires of policies producing a wide diversity of effects. Often, these algorithms relied on engineered goal spaces, but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. In this paper we show that using a disentangled goal space (i.e., a representation where each latent variable is sensitive to a single degree of freedom) leads to better exploration performance than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment.

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-laversanne-finot18a,
  title     = {Curiosity Driven Exploration of Learned Disentangled Goal Spaces},
  author    = {Laversanne-Finot, Adrien and Pere, Alexandre and Oudeyer, Pierre-Yves},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {487--504},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/laversanne-finot18a/laversanne-finot18a.pdf},
  url       = {https://proceedings.mlr.press/v87/laversanne-finot18a.html},
  abstract  = {Intrinsically motivated goal exploration processes enable agents to efficiently explore complex environments with high-dimensional continuous actions. They have been applied successfully to real-world robots to discover repertoires of policies producing a wide diversity of effects. Often, these algorithms relied on engineered goal spaces, but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. In this paper we show that using a disentangled goal space (i.e., a representation where each latent variable is sensitive to a single degree of freedom) leads to better exploration performance than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment.}
}
Endnote
%0 Conference Paper
%T Curiosity Driven Exploration of Learned Disentangled Goal Spaces
%A Adrien Laversanne-Finot
%A Alexandre Pere
%A Pierre-Yves Oudeyer
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-laversanne-finot18a
%I PMLR
%P 487--504
%U https://proceedings.mlr.press/v87/laversanne-finot18a.html
%V 87
%X Intrinsically motivated goal exploration processes enable agents to efficiently explore complex environments with high-dimensional continuous actions. They have been applied successfully to real-world robots to discover repertoires of policies producing a wide diversity of effects. Often, these algorithms relied on engineered goal spaces, but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. In this paper we show that using a disentangled goal space (i.e., a representation where each latent variable is sensitive to a single degree of freedom) leads to better exploration performance than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment.
APA
Laversanne-Finot, A., Pere, A., & Oudeyer, P.-Y. (2018). Curiosity Driven Exploration of Learned Disentangled Goal Spaces. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:487-504. Available from https://proceedings.mlr.press/v87/laversanne-finot18a.html.