Byzantine-Robust Online and Offline Distributed Reinforcement Learning

Yiding Chen, Xuezhou Zhang, Kaiqing Zhang, Mengdi Wang, Xiaojin Zhu
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3230-3269, 2023.

Abstract

We consider a distributed reinforcement learning setting where multiple agents separately explore the environment and communicate their experiences through a central server. However, an $\alpha$-fraction of agents are adversarial and can report arbitrary fake information. Critically, these adversarial agents can collude, and their fake data can be of any size. We aim to robustly identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents. Our main technical contribution is COW, a novel algorithm for the problem of robust mean estimation from batches that can handle arbitrary batch sizes. Building upon this new estimator, in the offline setting we design a Byzantine-robust distributed pessimistic value iteration algorithm; in the online setting we design a Byzantine-robust distributed optimistic value iteration algorithm. Both algorithms obtain near-optimal sample complexities and achieve stronger robustness guarantees than prior works.
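To illustrate the "robust mean estimation from batches" problem that the abstract refers to, the following is a minimal, hypothetical Python sketch of a classical baseline (the median of per-batch means). It is not the paper's COW estimator; the function name and toy data are assumptions made only to show the problem shape: each agent reports a batch of samples of arbitrary size, and an alpha-fraction of the batches may be arbitrarily corrupted.

import numpy as np

def median_of_batch_means(batches):
    """Aggregate possibly-corrupted batches by the median of batch means.

    `batches` is a list of 1-D arrays, one per agent; an alpha-fraction of
    them may be arbitrary (Byzantine). This coordinate-wise median baseline
    is NOT the paper's COW algorithm, only an illustration of the setup.
    """
    batch_means = np.array([np.mean(b) for b in batches])
    return np.median(batch_means)

# Toy usage: 10 honest agents sampling around mean 1.0 with varying batch
# sizes, plus 2 Byzantine agents reporting batches of arbitrary size/value.
rng = np.random.default_rng(0)
honest = [1.0 + 0.1 * rng.standard_normal(size=rng.integers(5, 50)) for _ in range(10)]
byzantine = [np.full(1000, 100.0), np.full(3, -100.0)]
print(median_of_batch_means(honest + byzantine))  # stays close to 1.0

Note that this baseline weights every batch equally regardless of its size, which wastes information from large honest batches and is one reason handling arbitrary batch sizes is nontrivial; the abstract highlights exactly this capability of COW.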

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-chen23b,
  title     = {Byzantine-Robust Online and Offline Distributed Reinforcement Learning},
  author    = {Chen, Yiding and Zhang, Xuezhou and Zhang, Kaiqing and Wang, Mengdi and Zhu, Xiaojin},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {3230--3269},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/chen23b/chen23b.pdf},
  url       = {https://proceedings.mlr.press/v206/chen23b.html},
  abstract  = {We consider a distributed reinforcement learning setting where multiple agents separately explore the environment and communicate their experiences through a central server. However, $\alpha$-fraction of agents are adversarial and can report arbitrary fake information. Critically, these adversarial agents can collude and their fake data can be of any sizes. We desire to robustly identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents. Our main technical contribution is COW, a novel algorithm for the robust mean estimation from batches problem, that can handle arbitrary batch sizes. Building upon this new estimator, in the offline setting, we design a Byzantine-robust distributed pessimistic value iteration algorithm; in the online setting, we design a Byzantine-robust distributed optimistic value iteration algorithm. Both algorithms obtain near-optimal sample complexities and achieve superior robustness guarantee than prior works.}
}
Endnote
%0 Conference Paper
%T Byzantine-Robust Online and Offline Distributed Reinforcement Learning
%A Yiding Chen
%A Xuezhou Zhang
%A Kaiqing Zhang
%A Mengdi Wang
%A Xiaojin Zhu
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-chen23b
%I PMLR
%P 3230--3269
%U https://proceedings.mlr.press/v206/chen23b.html
%V 206
%X We consider a distributed reinforcement learning setting where multiple agents separately explore the environment and communicate their experiences through a central server. However, $\alpha$-fraction of agents are adversarial and can report arbitrary fake information. Critically, these adversarial agents can collude and their fake data can be of any sizes. We desire to robustly identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents. Our main technical contribution is COW, a novel algorithm for the robust mean estimation from batches problem, that can handle arbitrary batch sizes. Building upon this new estimator, in the offline setting, we design a Byzantine-robust distributed pessimistic value iteration algorithm; in the online setting, we design a Byzantine-robust distributed optimistic value iteration algorithm. Both algorithms obtain near-optimal sample complexities and achieve superior robustness guarantee than prior works.
APA
Chen, Y., Zhang, X., Zhang, K., Wang, M. & Zhu, X. (2023). Byzantine-Robust Online and Offline Distributed Reinforcement Learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:3230-3269. Available from https://proceedings.mlr.press/v206/chen23b.html.
