APS: Active Pretraining with Successor Features

Hao Liu, Pieter Abbeel
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6736-6747, 2021.

Abstract

We introduce a new unsupervised pretraining objective for reinforcement learning. During the unsupervised, reward-free pretraining phase, the agent maximizes the mutual information between tasks and the states induced by its policy. Our key contribution is a novel lower bound on this intractable quantity. We show that by reinterpreting and combining variational successor features (Hansen et al., 2020) with nonparametric entropy maximization (Liu & Abbeel, 2021), the intractable mutual information can be optimized efficiently. The proposed method, Active Pretraining with Successor Features (APS), explores the environment via nonparametric entropy maximization, and the explored data are then leveraged efficiently to learn behaviors via variational successor features. APS addresses the limitations of existing unsupervised RL methods based on mutual information maximization and on entropy maximization, combining the best of both worlds. When evaluated on the Atari 100k data-efficiency benchmark, our approach significantly outperforms previous methods that combine unsupervised pretraining with task-specific finetuning.
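
To make the stated decomposition concrete, the objective described above can be sketched as follows. This is a non-authoritative reading: the particle-based entropy estimator and the discriminator form are assumptions drawn from the cited APT and variational successor features (VISR) methods rather than stated in the abstract itself.

    I(s; w) = H(s) - H(s | w)
            >= H_kNN(s) + E_{s,w}[ log q(w | s) ],

where w is the task variable, H_kNN(s) is a nonparametric (k-nearest-neighbor) particle estimate of the state entropy induced by the policy, and q(w | s) \propto exp( \phi(s)^T w ) is a variational successor-feature discriminator with \phi(s) and w constrained to the unit sphere. Under this reading, the intrinsic reward maximized during pretraining combines an exploration term and a task-inference term:

    r_APS(s, w) ~ log( 1 + (1/k) \sum_{j=1}^{k} || \phi(s) - \phi(s^{(j)}) || ) + \phi(s)^T w,

with s^{(1)}, ..., s^{(k)} the k nearest neighbors of s in successor-feature space. The first term drives exploration by pushing up the state-entropy estimate; the second rewards states whose successor features identify the current task w, which is what lets the explored data be reused during task-specific finetuning.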

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-liu21b,
  title     = {APS: Active Pretraining with Successor Features},
  author    = {Liu, Hao and Abbeel, Pieter},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6736--6747},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21b/liu21b.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21b.html}
}
Endnote
%0 Conference Paper
%T APS: Active Pretraining with Successor Features
%A Hao Liu
%A Pieter Abbeel
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21b
%I PMLR
%P 6736--6747
%U https://proceedings.mlr.press/v139/liu21b.html
%V 139
APA
Liu, H. & Abbeel, P. (2021). APS: Active Pretraining with Successor Features. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6736-6747. Available from https://www.proceedings.mlr.press/v139/liu21b.html.
