Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control

Lukas Kesper, Sebastian Trimpe, Dominik Baumann
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:1072-1085, 2023.

Abstract

Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research.
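To give a concrete feel for the idea the abstract describes, here is a minimal, hypothetical Python sketch of a threshold-based event trigger: an agent transmits its state only when it has drifted sufficiently since the last transmission. This is the classical hand-designed trigger; in the paper, both the trigger and the controller are instead learned jointly via model-free reinforcement learning, so all names and thresholds below are illustrative assumptions, not the authors' implementation.

    # Toy event-triggered communication rule (illustrative only; the paper
    # learns this trigger jointly with the control policy from data).
    import numpy as np

    def event_trigger(x, x_last_sent, threshold=0.5):
        """Transmit only when the state has drifted enough since the last send."""
        return np.linalg.norm(x - x_last_sent) > threshold

    # Mock usage: an agent deciding at each step whether to broadcast.
    x_last_sent = np.zeros(2)
    for t in range(5):
        x = x_last_sent + np.random.normal(scale=0.3, size=2)  # mock local state
        if event_trigger(x, x_last_sent):
            x_last_sent = x.copy()  # "broadcast" and remember what was sent
            print(f"t={t}: communicate")
        else:
            print(f"t={t}: stay silent, save bandwidth")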

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-kesper23a,
  title     = {Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control},
  author    = {Kesper, Lukas and Trimpe, Sebastian and Baumann, Dominik},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {1072--1085},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/kesper23a/kesper23a.pdf},
  url       = {https://proceedings.mlr.press/v211/kesper23a.html},
  abstract  = {Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research.}
}
Endnote
%0 Conference Paper
%T Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control
%A Lukas Kesper
%A Sebastian Trimpe
%A Dominik Baumann
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-kesper23a
%I PMLR
%P 1072--1085
%U https://proceedings.mlr.press/v211/kesper23a.html
%V 211
%X Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research.
APA
Kesper, L., Trimpe, S., & Baumann, D. (2023). Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:1072-1085. Available from https://proceedings.mlr.press/v211/kesper23a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v211/kesper23a/kesper23a.pdf