Fast Adaptation to New Environments via Policy-Dynamics Value Functions

Roberta Raileanu, Max Goldstein, Arthur Szlam, Rob Fergus
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7920-7931, 2020.

Abstract

Standard RL algorithms assume fixed environment dynamics and require a significant amount of interaction to adapt to new environments. We introduce Policy-Dynamics Value Functions (PD-VF), a novel approach for rapidly adapting to dynamics different from those previously seen in training. PD-VF explicitly estimates the cumulative reward in a space of policies and environments. An ensemble of conventional RL policies is used to gather experience on training environments, from which embeddings of both policies and environments can be learned. Then, a value function conditioned on both embeddings is trained. At test time, a few actions are sufficient to infer the environment embedding, enabling a policy to be selected by maximizing the learned value function (which requires no additional environment interaction). We show that our method can rapidly adapt to new dynamics on a set of MuJoCo domains.
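To make the pipeline concrete, below is a minimal PyTorch sketch of the core idea: a value function V(z_pi, z_e) over a policy embedding and an environment embedding, plus test-time selection of a policy by maximizing V for an environment embedding inferred from a few transitions. All module names, dimensions, the mean-pooled transition encoder, and the enumeration over a finite candidate set are illustrative assumptions, not the authors' implementation (the paper optimizes directly in the policy embedding space).

# Illustrative sketch only: architecture details, names, and the
# candidate-enumeration step are assumptions, not the paper's code.
import torch
import torch.nn as nn

class EnvEncoder(nn.Module):
    """Maps a few observed transitions to an environment embedding z_e."""
    def __init__(self, transition_dim: int, env_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(transition_dim, 64), nn.ReLU(),
                                 nn.Linear(64, env_dim))

    def forward(self, transitions: torch.Tensor) -> torch.Tensor:
        # transitions: (T, transition_dim) feature vectors, e.g. concatenated
        # (state, action, next_state); mean-pool per-transition features.
        return self.mlp(transitions).mean(dim=0)  # (env_dim,)

class PDValueFunction(nn.Module):
    """Estimates cumulative reward of running policy z_pi in environment z_e."""
    def __init__(self, policy_dim: int, env_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(policy_dim + env_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z_pi: torch.Tensor, z_e: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_pi, z_e], dim=-1))

def select_policy(vf: PDValueFunction,
                  z_e: torch.Tensor,                 # (env_dim,) inferred at test time
                  candidate_policies: torch.Tensor   # (N, policy_dim) learned embeddings
                  ) -> torch.Tensor:
    """Pick the candidate policy embedding that maximizes the learned value.

    No further environment interaction is needed: selection is a forward pass
    through the value function. Enumerating a finite candidate set is a
    simplification; the paper optimizes over the continuous embedding space.
    """
    z_e_batch = z_e.expand(candidate_policies.size(0), -1)   # (N, env_dim)
    values = vf(candidate_policies, z_e_batch).squeeze(-1)   # (N,)
    return candidate_policies[values.argmax()]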

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-raileanu20a,
  title     = {Fast Adaptation to New Environments via Policy-Dynamics Value Functions},
  author    = {Raileanu, Roberta and Goldstein, Max and Szlam, Arthur and Fergus, Rob},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7920--7931},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/raileanu20a/raileanu20a.pdf},
  url       = {https://proceedings.mlr.press/v119/raileanu20a.html}
}
Endnote
%0 Conference Paper
%T Fast Adaptation to New Environments via Policy-Dynamics Value Functions
%A Roberta Raileanu
%A Max Goldstein
%A Arthur Szlam
%A Rob Fergus
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-raileanu20a
%I PMLR
%P 7920--7931
%U https://proceedings.mlr.press/v119/raileanu20a.html
%V 119
APA
Raileanu, R., Goldstein, M., Szlam, A. & Fergus, R. (2020). Fast Adaptation to New Environments via Policy-Dynamics Value Functions. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7920-7931. Available from https://proceedings.mlr.press/v119/raileanu20a.html.