Stable Reinforcement Learning with Unbounded State Space

Devavrat Shah, Qiaomin Xie, Zhi Xu
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:581-581, 2020.

Abstract

We consider the problem of reinforcement learning (RL) with an unbounded state space, motivated by the classical problem of scheduling in a queueing network. We argue that a reasonable RL policy for such settings must be based on online training, since any policy based only on finite samples cannot perform well across the entire unbounded state space. We introduce such an online RL policy using a Sparse-Sampling-based Monte Carlo Oracle. To analyze this policy, we propose an appropriate notion of desirable performance in terms of stability: the state dynamics under the policy should remain in a bounded region with high probability. We show that if the system dynamics under the optimal policy respect a Lyapunov function, then our policy is stable. Our policy does not need to know the Lyapunov function. Moreover, the assumption that a Lyapunov function exists is not restrictive, as it is equivalent to the positive recurrence, or stability, of a Markov chain; that is, if there is any policy that can stabilize the system, then that policy must possess a Lyapunov function.
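As a point of reference, the Lyapunov assumption above is typically formalized through a Foster-Lyapunov drift condition. The display below is an illustrative sketch in generic notation (a nonnegative function V, a bounded set B, and constants c > 0 and b < ∞), not necessarily the paper's exact formulation:

\[
  \mathbb{E}\bigl[\, V(X_{t+1}) - V(X_t) \,\bigm|\, X_t = x \,\bigr] \;\le\; -c \;+\; b\,\mathbf{1}\{x \in B\}.
\]

In words, outside the bounded region B the function V decreases in expectation by at least c at every step; under standard conditions (e.g., irreducibility of the chain), such a drift condition is equivalent to positive recurrence, which is the stability property invoked in the abstract.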

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-shah20a,
  title     = {Stable Reinforcement Learning with Unbounded State Space},
  author    = {Shah, Devavrat and Xie, Qiaomin and Xu, Zhi},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {581--581},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/shah20a/shah20a.pdf},
  url       = {https://proceedings.mlr.press/v120/shah20a.html},
  abstract  = {We consider the problem of reinforcement learning (RL) with an unbounded state space, motivated by the classical problem of scheduling in a queueing network. We argue that a reasonable RL policy for such settings must be based on online training, since any policy based only on finite samples cannot perform well across the entire unbounded state space. We introduce such an online RL policy using a Sparse-Sampling-based Monte Carlo Oracle. To analyze this policy, we propose an appropriate notion of desirable performance in terms of stability: the state dynamics under the policy should remain in a bounded region with high probability. We show that if the system dynamics under the optimal policy respect a Lyapunov function, then our policy is stable. Our policy does not need to know the Lyapunov function. Moreover, the assumption that a Lyapunov function exists is not restrictive, as it is equivalent to the positive recurrence, or stability, of a Markov chain; that is, if there is any policy that can stabilize the system, then that policy must possess a Lyapunov function.}
}
Endnote
%0 Conference Paper
%T Stable Reinforcement Learning with Unbounded State Space
%A Devavrat Shah
%A Qiaomin Xie
%A Zhi Xu
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-shah20a
%I PMLR
%P 581--581
%U https://proceedings.mlr.press/v120/shah20a.html
%V 120
%X We consider the problem of reinforcement learning (RL) with an unbounded state space, motivated by the classical problem of scheduling in a queueing network. We argue that a reasonable RL policy for such settings must be based on online training, since any policy based only on finite samples cannot perform well across the entire unbounded state space. We introduce such an online RL policy using a Sparse-Sampling-based Monte Carlo Oracle. To analyze this policy, we propose an appropriate notion of desirable performance in terms of stability: the state dynamics under the policy should remain in a bounded region with high probability. We show that if the system dynamics under the optimal policy respect a Lyapunov function, then our policy is stable. Our policy does not need to know the Lyapunov function. Moreover, the assumption that a Lyapunov function exists is not restrictive, as it is equivalent to the positive recurrence, or stability, of a Markov chain; that is, if there is any policy that can stabilize the system, then that policy must possess a Lyapunov function.
APA
Shah, D., Xie, Q. & Xu, Z. (2020). Stable Reinforcement Learning with Unbounded State Space. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:581-581. Available from https://proceedings.mlr.press/v120/shah20a.html.
