Bayesian Nonparametrics for Offline Skill Discovery

Valentin Villecroze, Harry Braviner, Panteha Naderian, Chris Maddison, Gabriel Loaiza-Ganem
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:22284-22299, 2022.

Abstract

Skills or low-level policies in reinforcement learning are temporally extended actions that can speed up learning and enable complex behaviours. Recent work in offline reinforcement learning and imitation learning has proposed several techniques for skill discovery from a set of expert trajectories. While these methods are promising, the number K of skills to discover is always a fixed hyperparameter, which requires either prior knowledge about the environment or an additional parameter search to tune it. We first propose a method for offline learning of options (a particular skill framework) exploiting advances in variational inference and continuous relaxations. We then highlight an unexplored connection between Bayesian nonparametrics and offline skill discovery, and show how to obtain a nonparametric version of our model. This version is tractable thanks to a carefully structured approximate posterior with a dynamically-changing number of options, removing the need to specify K. We also show how our nonparametric extension can be applied in other skill frameworks, and empirically demonstrate that our method can outperform state-of-the-art offline skill learning algorithms across a variety of environments.
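
The paper's exact construction is given in the full text; as a rough, hypothetical illustration of the Bayesian-nonparametric idea the abstract alludes to, the Python sketch below draws skill mixture weights from a stick-breaking (GEM) prior, under which the number of active skills is governed by a concentration parameter alpha and the data rather than a fixed K. The function name, truncation tolerance, and sampling setup are illustrative assumptions, not the authors' model.

import numpy as np

def stick_breaking_weights(alpha, tol=1e-6, rng=None):
    # Illustrative sketch (not the paper's implementation): sample mixture
    # weights from a GEM(alpha) stick-breaking prior. Rather than fixing the
    # number of skills K in advance, a Dirichlet process supports countably
    # many skills; we stop generating sticks once the leftover mass < tol.
    rng = np.random.default_rng() if rng is None else rng
    weights, remaining = [], 1.0
    while remaining > tol:
        frac = rng.beta(1.0, alpha)       # fraction of the remaining stick
        weights.append(remaining * frac)
        remaining *= 1.0 - frac
    return np.array(weights)

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, rng=rng)
# Assign 10 trajectory segments to skills under these weights; the number
# of distinct skills in use is inferred from the draw, not hand-tuned.
segments = rng.choice(len(w), size=10, p=w / w.sum())
print(f"{len(w)} sticks drawn; segment-to-skill assignments: {segments}")

In this construction, larger alpha spreads mass over more sticks and so tends to activate more skills; this is the sense in which an effective K can adapt to the data instead of being set as a hyperparameter.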

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-villecroze22a,
  title     = {{B}ayesian Nonparametrics for Offline Skill Discovery},
  author    = {Villecroze, Valentin and Braviner, Harry and Naderian, Panteha and Maddison, Chris and Loaiza-Ganem, Gabriel},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {22284--22299},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/villecroze22a/villecroze22a.pdf},
  url       = {https://proceedings.mlr.press/v162/villecroze22a.html}
}
Endnote
%0 Conference Paper
%T Bayesian Nonparametrics for Offline Skill Discovery
%A Valentin Villecroze
%A Harry Braviner
%A Panteha Naderian
%A Chris Maddison
%A Gabriel Loaiza-Ganem
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-villecroze22a
%I PMLR
%P 22284--22299
%U https://proceedings.mlr.press/v162/villecroze22a.html
%V 162
APA
Villecroze, V., Braviner, H., Naderian, P., Maddison, C. & Loaiza-Ganem, G. (2022). Bayesian Nonparametrics for Offline Skill Discovery. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:22284-22299. Available from https://proceedings.mlr.press/v162/villecroze22a.html.