Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning

Anoopkumar Sonar, Vincent Pacelli, Anirudha Majumdar
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:21-33, 2021.

Abstract

A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by finding causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.
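As an illustrative aside (not part of the paper's published text or code): the invariance principle described in the abstract is closely related to IRM-style penalties applied to a policy-gradient objective. The sketch below shows that general idea on a REINFORCE-style surrogate loss, assuming PyTorch; the class and function names (DiscretePolicy, domain_loss, ipo_style_loss) are hypothetical placeholders for this sketch, and the paper's actual objective, penalty schedule, and training details may differ.

    import torch
    import torch.nn as nn
    from torch.distributions import Categorical

    # Minimal, assumption-laden sketch of an IRM-style invariance penalty
    # added to a policy-gradient loss; not the authors' implementation.

    class DiscretePolicy(nn.Module):
        """Toy policy: a shared representation followed by an action head."""
        def __init__(self, obs_dim, n_actions, hidden=32):
            super().__init__()
            self.repr = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs):
            return Categorical(logits=self.head(self.repr(obs)))

    def domain_loss(policy, batch, scale):
        # REINFORCE surrogate for one training domain:
        # -E[scale * log pi(a|s) * return].
        logp = policy(batch["obs"]).log_prob(batch["act"])
        return -(scale * logp * batch["ret"]).mean()

    def ipo_style_loss(policy, domain_batches, penalty_weight=1.0):
        # Fixed scalar "dummy" predictor used only to probe whether the same
        # action-predictor is simultaneously optimal in every domain.
        dummy = torch.tensor(1.0, requires_grad=True)
        loss_sum, penalty_sum = 0.0, 0.0
        for batch in domain_batches:
            loss = domain_loss(policy, batch, dummy)
            # If the learned representation were invariant, this gradient would
            # vanish in every domain; its squared norm penalizes domain-specific
            # (non-causal) features.
            grad = torch.autograd.grad(loss, dummy, create_graph=True)[0]
            loss_sum = loss_sum + loss
            penalty_sum = penalty_sum + grad.pow(2)
        return loss_sum + penalty_weight * penalty_sum

In this sketch, ipo_style_loss would be minimized over mini-batches drawn separately from each training domain, with penalty_weight trading off per-domain performance against cross-domain invariance.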

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-sonar21a,
  title = {Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning},
  author = {Sonar, Anoopkumar and Pacelli, Vincent and Majumdar, Anirudha},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages = {21--33},
  year = {2021},
  editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume = {144},
  series = {Proceedings of Machine Learning Research},
  month = {07 -- 08 June},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v144/sonar21a/sonar21a.pdf},
  url = {https://proceedings.mlr.press/v144/sonar21a.html},
  abstract = {A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by finding causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.}
}
EndNote
%0 Conference Paper
%T Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning
%A Anoopkumar Sonar
%A Vincent Pacelli
%A Anirudha Majumdar
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-sonar21a
%I PMLR
%P 21--33
%U https://proceedings.mlr.press/v144/sonar21a.html
%V 144
%X A fundamental challenge in reinforcement learning is to learn policies that generalize beyond the operating domains experienced during training. In this paper, we approach this challenge through the following invariance principle: an agent must find a representation such that there exists an action-predictor built on top of this representation that is simultaneously optimal across all training domains. Intuitively, the resulting invariant policy enhances generalization by finding causes of successful actions. We propose a novel learning algorithm, Invariant Policy Optimization (IPO), that implements this principle and learns an invariant policy during training. We compare our approach with standard policy gradient methods and demonstrate significant improvements in generalization performance on unseen domains for linear quadratic regulator and grid-world problems, and an example where a robot must learn to open doors with varying physical properties.
APA
Sonar, A., Pacelli, V. & Majumdar, A. (2021). Invariant Policy Optimization: Towards Stronger Generalization in Reinforcement Learning. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:21-33. Available from https://proceedings.mlr.press/v144/sonar21a.html.
