Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills

Victor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-I-Nieto, Jordi Torres
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1317-1327, 2020.

Abstract

Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted in understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation – they discover options that provide a poor coverage of the state space. In light of this, we propose Explore, Discover and Learn (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-campos20a,
  title = {Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills},
  author = {Campos, Victor and Trott, Alexander and Xiong, Caiming and Socher, Richard and Giro-I-Nieto, Xavier and Torres, Jordi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {1317--1327},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/campos20a/campos20a.pdf},
  url = {https://proceedings.mlr.press/v119/campos20a.html},
  abstract = {Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted in understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation – they discover options that provide a poor coverage of the state space. In light of this, we propose Explore, Discover and Learn (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.}
}
Endnote
%0 Conference Paper
%T Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills
%A Victor Campos
%A Alexander Trott
%A Caiming Xiong
%A Richard Socher
%A Xavier Giro-I-Nieto
%A Jordi Torres
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-campos20a
%I PMLR
%P 1317--1327
%U https://proceedings.mlr.press/v119/campos20a.html
%V 119
%X Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted in understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation – they discover options that provide a poor coverage of the state space. In light of this, we propose Explore, Discover and Learn (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.
APA
Campos, V., Trott, A., Xiong, C., Socher, R., Giro-I-Nieto, X. & Torres, J. (2020). Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1317-1327. Available from https://proceedings.mlr.press/v119/campos20a.html.