Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies

Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, Peter J Ramadge
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11795-11807, 2021.

Abstract

We consider the problem of reinforcement learning when provided with (1) a baseline control policy and (2) a set of constraints that the learner must satisfy. The baseline policy can arise from demonstration data or a teacher agent and may provide useful cues for learning, but it might also be sub-optimal for the task at hand, and is not guaranteed to satisfy the specified constraints, which might encode safety, fairness or other application-specific requirements. In order to safely learn from baseline policies, we propose an iterative policy optimization algorithm that alternates between maximizing expected return on the task, minimizing distance to the baseline policy, and projecting the policy onto the constraint-satisfying set. We analyze our algorithm theoretically and provide a finite-time convergence guarantee. In our experiments on five different control tasks, our algorithm consistently outperforms several state-of-the-art baselines, achieving 10 times fewer constraint violations and 40% higher reward on average.
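The abstract describes an iterative scheme that alternates between three updates: improving the task objective, staying close to the baseline policy, and projecting onto the constraint-satisfying set. The sketch below is a minimal, hypothetical numerical analogue of that three-step structure on a 2-D parameter vector; the toy reward, baseline parameters, constraint, and step sizes are all assumptions for illustration, not the paper's actual policy-optimization procedure, which operates on stochastic policies learned from experience.

```python
# Toy numerical analogue of the three-step iteration described in the abstract.
# All quantities (toy reward, baseline parameters, constraint, step sizes) are
# illustrative stand-ins, not the paper's actual objects.
import numpy as np

# Hypothetical 2-D "policy parameters".
theta_star = np.array([2.0, 1.0])   # task optimum (maximizes the toy reward)
theta_base = np.array([0.0, 3.0])   # baseline: sub-optimal and constraint-violating
a, b = np.array([0.0, 1.0]), 2.0    # constraint set: {theta : a @ theta <= b}

def reward_grad(theta):
    """Gradient of the toy reward R(theta) = -||theta - theta_star||^2."""
    return -2.0 * (theta - theta_star)

def project(theta):
    """Euclidean projection onto the half-space a @ theta <= b."""
    violation = a @ theta - b
    if violation <= 0.0:
        return theta
    return theta - (violation / (a @ a)) * a

theta = theta_base.copy()           # start from the (infeasible) baseline
alpha, beta = 0.05, 0.1             # step sizes (arbitrary for illustration)

for k in range(200):
    # Step 1: improve the task objective (ascend the toy reward).
    theta = theta + alpha * reward_grad(theta)
    # Step 2: stay close to the baseline (gradient step on 0.5*||theta - theta_base||^2).
    theta = theta - beta * (theta - theta_base)
    # Step 3: project onto the constraint-satisfying set.
    theta = project(theta)

print("final parameters:", theta, "| constraint slack a@theta - b =", a @ theta - b)
```

Starting from an infeasible baseline, the iterates are pulled toward the task optimum while the per-iteration projection keeps them inside the constraint set, mirroring (in a very simplified form) the reward / baseline-distance / projection alternation stated in the abstract.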

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-yang21i,
  title     = {Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies},
  author    = {Yang, Tsung-Yen and Rosca, Justinian and Narasimhan, Karthik and Ramadge, Peter J},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11795--11807},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/yang21i/yang21i.pdf},
  url       = {https://proceedings.mlr.press/v139/yang21i.html}
}
Endnote
%0 Conference Paper
%T Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies
%A Tsung-Yen Yang
%A Justinian Rosca
%A Karthik Narasimhan
%A Peter J Ramadge
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-yang21i
%I PMLR
%P 11795--11807
%U https://proceedings.mlr.press/v139/yang21i.html
%V 139
APA
Yang, T.-Y., Rosca, J., Narasimhan, K., & Ramadge, P. J. (2021). Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11795-11807. Available from https://proceedings.mlr.press/v139/yang21i.html.