A Model Selection Approach for Corruption Robust Reinforcement Learning

Chen-Yu Wei, Christoph Dann, Julian Zimmert
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:1043-1096, 2022.

Abstract

We develop a model selection approach to tackle reinforcement learning with adversarial corruption in both transition and reward. For finite-horizon tabular MDPs, without prior knowledge of the total amount of corruption, our algorithm achieves a regret bound of $\tilde{O}(\min\{\frac{1}{\Delta}, \sqrt{T}\}+C)$, where $T$ is the number of episodes, $C$ is the total amount of corruption, and $\Delta$ is the reward gap between the best and the second-best policy. This is the first worst-case optimal bound achieved without knowledge of $C$, improving the previous results of Lykouris et al. (2021), Chen et al. (2021), and Wu et al. (2021). For finite-horizon linear MDPs, we develop a computationally efficient algorithm with a regret bound of $\tilde{O}(\sqrt{(1+C)T})$, and another computationally inefficient one with $\tilde{O}(\sqrt{T}+C)$, improving the result of Lykouris et al. (2021) and answering an open question by Zhang et al. (2021). Finally, our model selection framework can be easily applied to other settings including linear bandits, linear contextual bandits, and MDPs with general function approximation, leading to several improved or new results.
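
For quick reference, the headline regret bounds stated above can be collected in display form. This is only a restatement of the abstract, with $T$, $C$, and $\Delta$ as defined there:

$$\text{Tabular MDPs, } C \text{ unknown:}\qquad \tilde{O}\Big(\min\Big\{\tfrac{1}{\Delta},\,\sqrt{T}\Big\} + C\Big)$$
$$\text{Linear MDPs, computationally efficient:}\qquad \tilde{O}\big(\sqrt{(1+C)\,T}\big)$$
$$\text{Linear MDPs, computationally inefficient:}\qquad \tilde{O}\big(\sqrt{T} + C\big)$$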

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-wei22a,
  title     = {A Model Selection Approach for Corruption Robust Reinforcement Learning},
  author    = {Wei, Chen-Yu and Dann, Christoph and Zimmert, Julian},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {1043--1096},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/wei22a/wei22a.pdf},
  url       = {https://proceedings.mlr.press/v167/wei22a.html}
}
Endnote
%0 Conference Paper
%T A Model Selection Approach for Corruption Robust Reinforcement Learning
%A Chen-Yu Wei
%A Christoph Dann
%A Julian Zimmert
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-wei22a
%I PMLR
%P 1043--1096
%U https://proceedings.mlr.press/v167/wei22a.html
%V 167
APA
Wei, C., Dann, C. & Zimmert, J. (2022). A Model Selection Approach for Corruption Robust Reinforcement Learning. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:1043-1096. Available from https://proceedings.mlr.press/v167/wei22a.html.
