Stable Reinforcement Learning with Unbounded State Space
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:581-581, 2020.
We consider the problem of reinforcement learning (RL) in an unbounded state space, motivated by the classical problem of scheduling in a queueing network. We argue that a reasonable RL policy for such settings must be trained online, since any policy learned from a fixed, finite set of samples cannot perform well over the entire unbounded state space. We introduce such an online RL policy using a Sparse-Sampling-based Monte Carlo Oracle. To analyze this policy, we propose an appropriate notion of desirable performance in terms of stability: the state dynamics under the policy should remain in a bounded region with high probability. We show that if the system dynamics under the optimal policy admit a Lyapunov function, then our policy is stable; the policy itself does not need to know this Lyapunov function. Moreover, the existence of a Lyapunov function is not a restrictive assumption: it is equivalent to positive recurrence, i.e., stability, of the induced Markov chain, so if any policy can stabilize the system, the dynamics under that policy must possess a Lyapunov function.
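
For concreteness, the Lyapunov condition invoked here is the standard Foster-Lyapunov drift condition; a minimal statement is given below, where the constants epsilon and b and the bounded set B are illustrative rather than the paper's exact quantities:

% Foster-Lyapunov drift condition under the optimal policy (illustrative form):
% there exist V : S -> [0, infinity), epsilon > 0, b < infinity, and a bounded
% set B such that
\mathbb{E}\bigl[\,V(s_{t+1}) - V(s_t) \,\big|\, s_t = s\,\bigr] \;\le\; -\epsilon \,+\, b\,\mathbf{1}\{s \in B\}.

Outside B the function V decreases in expectation, which is what keeps the state dynamics in a bounded region with high probability.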
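At a high level, a sparse-sampling Monte Carlo oracle performs simulation-based lookahead in the spirit of Kearns, Mansour, and Ng's sparse sampling. The following is a minimal sketch under assumed interfaces: the generative simulator, discount factor gamma, sample width C, and horizon H are illustrative choices, not the paper's exact construction.

    # Minimal sketch of a sparse-sampling Monte Carlo oracle (Kearns-Mansour-Ng
    # style lookahead). The simulator interface and the parameters gamma, H, C
    # are illustrative assumptions, not the paper's exact algorithm.

    def sparse_sampling_q(simulator, state, actions, gamma, H, C):
        """Estimate Q(state, a) for each action by depth-H sparse sampling.

        simulator(state, action) -> (next_state, reward) is assumed to be a
        generative model that can be queried at arbitrary states.
        """
        if H == 0:
            return {a: 0.0 for a in actions}
        q = {}
        for a in actions:
            total = 0.0
            for _ in range(C):  # C independent next-state samples per action
                next_state, reward = simulator(state, a)
                q_next = sparse_sampling_q(simulator, next_state, actions,
                                           gamma, H - 1, C)
                total += reward + gamma * max(q_next.values())
            q[a] = total / C
        return q

    def oracle_policy(simulator, state, actions, gamma=0.95, H=3, C=5):
        """Act greedily with respect to the sparse-sampling Q estimates."""
        q = sparse_sampling_q(simulator, state, actions, gamma, H, C)
        return max(q, key=q.get)

Because the oracle only queries the simulator locally around the current state, it can be invoked online at any state of the unbounded space, which is what motivates its use in this setting.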