Provable Reset-free Reinforcement Learning by No-Regret Reduction

Hoai-An Nguyen, Ching-An Cheng
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:25939-25955, 2023.

Abstract

Reinforcement learning (RL) so far has limited real-world applications. One key challenge is that typical RL algorithms heavily rely on a reset mechanism to sample proper initial states; these reset mechanisms, in practice, are expensive to implement due to the need for human intervention or heavily engineered environments. To make learning more practical, we propose a generic no-regret reduction to systematically design reset-free RL algorithms. Our reduction turns the reset-free RL problem into a two-player game. We show that achieving sublinear regret in this two-player game would imply learning a policy that has both sublinear performance regret and sublinear total number of resets in the original RL problem. This means that the agent eventually learns to perform optimally and avoid resets. To demonstrate the effectiveness of this reduction, we design an instantiation for linear Markov decision processes, which is the first provably correct reset-free RL algorithm.
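As a rough illustration (using generic episodic-RL notation, not the paper's own definitions), the guarantee described above can be read as two quantities that must grow sublinearly over K rounds of interaction:

\[
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V^{\pi^\ast}\!\big(s^k_1\big) - V^{\pi_k}\!\big(s^k_1\big) \Big) \;=\; o(K),
\qquad
\mathrm{Resets}(K) \;=\; \sum_{k=1}^{K} \mathbb{1}\{\text{a reset occurs in round } k\} \;=\; o(K).
\]

Under this reading, both the average per-round suboptimality and the average reset rate vanish as K grows, matching the claim that the agent eventually learns to perform optimally while avoiding resets.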

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-nguyen23b,
  title     = {Provable Reset-free Reinforcement Learning by No-Regret Reduction},
  author    = {Nguyen, Hoai-An and Cheng, Ching-An},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {25939--25955},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/nguyen23b/nguyen23b.pdf},
  url       = {https://proceedings.mlr.press/v202/nguyen23b.html}
}
Endnote
%0 Conference Paper
%T Provable Reset-free Reinforcement Learning by No-Regret Reduction
%A Hoai-An Nguyen
%A Ching-An Cheng
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-nguyen23b
%I PMLR
%P 25939--25955
%U https://proceedings.mlr.press/v202/nguyen23b.html
%V 202
APA
Nguyen, H. & Cheng, C. (2023). Provable Reset-free Reinforcement Learning by No-Regret Reduction. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:25939-25955. Available from https://proceedings.mlr.press/v202/nguyen23b.html.