Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning

Daniel Tabas, Ahmed S Zamzam, Baosen Zhang
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:1205-1217, 2023.

Abstract

Constrained multiagent reinforcement learning (C-MARL) is gaining importance as MARL algorithms find new applications in real-world systems ranging from energy systems to drone swarms. Most C-MARL algorithms use a primal-dual approach to enforce constraints through a penalty function added to the reward. In this paper, we study the structural effects of this penalty term on the MARL problem. First, we show that the standard practice of using the constraint function as the penalty leads to a weak notion of safety. However, by making simple modifications to the penalty term, we can enforce meaningful probabilistic (chance and conditional value at risk) constraints. Second, we quantify the effect of the penalty term on the value function, uncovering an improved value estimation procedure. We use these insights to propose a constrained multiagent advantage actor critic (C-MAA2C) algorithm. Simulations in a simple constrained multiagent environment affirm that our reinterpretation of the primal-dual method in terms of probabilistic constraints is effective, and that our proposed value estimate accelerates convergence to a safe joint policy.
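As a rough illustration of the primal-dual mechanics the abstract refers to, the sketch below shows a generic Lagrangian reward-shaping and dual-ascent loop, together with one plausible form of the "simple modification" to the penalty term: replacing the raw constraint function with an indicator of violation, which targets the probability of violation rather than the expected cumulative cost. All names, signatures, and the exact penalty form here are assumptions for exposition, not the paper's implementation.

# Hedged sketch of the generic primal-dual loop behind most C-MARL methods.
# `reward` and `cost` are per-step scalars from the environment; `budget` is
# the constraint threshold. Every name is illustrative, not the paper's API.

def shaped_reward(reward, cost, lam, chance_penalty=False):
    """Penalized reward used by the primal (policy) update.

    Standard practice penalizes the raw constraint function c_t, which only
    bounds the *expected* cumulative cost. Penalizing an indicator of
    violation instead (one plausible reading of the abstract's "simple
    modifications") targets the *probability* of constraint violation.
    """
    penalty = float(cost > 0.0) if chance_penalty else cost
    return reward - lam * penalty

def dual_ascent_step(lam, episode_cost, budget, lr=0.01):
    """Dual update: lambda <- max(0, lambda + lr * (J_c - budget))."""
    return max(0.0, lam + lr * (episode_cost - budget))

# Toy usage: the dual variable tightens the penalty while rollouts exceed
# the budget, and relaxes once the joint policy becomes safe.
lam = 0.0
for episode_cost in [3.0, 2.5, 1.0, 0.4]:  # costs from successive rollouts
    lam = dual_ascent_step(lam, episode_cost, budget=0.5)
print(lam)  # grows while the constraint is violated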

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-tabas23a,
  title     = {Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning},
  author    = {Tabas, Daniel and Zamzam, Ahmed S and Zhang, Baosen},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {1205--1217},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/tabas23a/tabas23a.pdf},
  url       = {https://proceedings.mlr.press/v211/tabas23a.html},
  abstract  = {Constrained multiagent reinforcement learning (C-MARL) is gaining importance as MARL algorithms find new applications in real-world systems ranging from energy systems to drone swarms. Most C-MARL algorithms use a primal-dual approach to enforce constraints through a penalty function added to the reward. In this paper, we study the structural effects of this penalty term on the MARL problem. First, we show that the standard practice of using the constraint function as the penalty leads to a weak notion of safety. However, by making simple modifications to the penalty term, we can enforce meaningful probabilistic (chance and conditional value at risk) constraints. Second, we quantify the effect of the penalty term on the value function, uncovering an improved value estimation procedure. We use these insights to propose a constrained multiagent advantage actor critic (C-MAA2C) algorithm. Simulations in a simple constrained multiagent environment affirm that our reinterpretation of the primal-dual method in terms of probabilistic constraints is effective, and that our proposed value estimate accelerates convergence to a safe joint policy.}
}
Endnote
%0 Conference Paper
%T Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning
%A Daniel Tabas
%A Ahmed S Zamzam
%A Baosen Zhang
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-tabas23a
%I PMLR
%P 1205--1217
%U https://proceedings.mlr.press/v211/tabas23a.html
%V 211
%X Constrained multiagent reinforcement learning (C-MARL) is gaining importance as MARL algorithms find new applications in real-world systems ranging from energy systems to drone swarms. Most C-MARL algorithms use a primal-dual approach to enforce constraints through a penalty function added to the reward. In this paper, we study the structural effects of this penalty term on the MARL problem. First, we show that the standard practice of using the constraint function as the penalty leads to a weak notion of safety. However, by making simple modifications to the penalty term, we can enforce meaningful probabilistic (chance and conditional value at risk) constraints. Second, we quantify the effect of the penalty term on the value function, uncovering an improved value estimation procedure. We use these insights to propose a constrained multiagent advantage actor critic (C-MAA2C) algorithm. Simulations in a simple constrained multiagent environment affirm that our reinterpretation of the primal-dual method in terms of probabilistic constraints is effective, and that our proposed value estimate accelerates convergence to a safe joint policy.
APA
Tabas, D., Zamzam, A. S., & Zhang, B. (2023). Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:1205-1217. Available from https://proceedings.mlr.press/v211/tabas23a.html.
