Reward Biased Maximum Likelihood Estimation for Reinforcement Learning
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:815-827, 2021.
Abstract
The Reward-Biased Maximum Likelihood Estimate (RBMLE) for adaptive control of Markov chains was proposed in (Kumar and Becker, 1982) to overcome the central obstacle of what is variously called the fundamental "closed-loop identifiability problem" of adaptive control (Borkar and Varaiya, 1979), the "dual control problem" by Feldbaum (Feldbaum, 1960a,b), or, contemporaneously, the "exploration vs. exploitation problem." It exploited the key observation that since the maximum likelihood parameter estimator can asymptotically identify only the closed-loop transition probabilities under a certainty-equivalent approach (Borkar and Varaiya, 1979), the limiting parameter estimates must necessarily have an optimal reward that is less than the optimal reward attainable for the true but unknown system. Hence it introduced a counteracting reverse bias in favor of parameters with larger optimal rewards, providing a carefully structured solution to the fundamental problem alluded to above, and thereby anticipating the principle now known as "optimism in the face of uncertainty." The RBMLE approach has been proved to be long-term average-reward optimal in a variety of contexts, including controlled Markov chains, linear quadratic Gaussian (LQG) systems, some nonlinear systems, and diffusions. However, modern attention is focused on the much finer notion of "regret," or finite-time performance over all time, espoused by Lai and Robbins (1985). Recent analysis of RBMLE for multi-armed stochastic bandits (Liu et al., 2020) and linear contextual bandits (Hung et al., 2020) has shown that it not only has state-of-the-art regret, but also exhibits empirical performance comparable to or better than the best current contenders, and it leads to several new and strikingly simple index policies for these classical problems. Motivated by this, we examine the finite-time performance of RBMLE for reinforcement learning tasks that involve the general problem of optimal control of unknown Markov Decision Processes. We show that it has a regret of O(log T) over a time horizon T, similar to state-of-the-art algorithms.
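To make the reward-biasing mechanism concrete, the following is a minimal sketch in the Bernoulli multi-armed bandit setting studied in (Liu et al., 2020): the estimate maximizes the log-likelihood plus a bias term α(t) times the optimal reward of the parameter, and the agent then acts greedily under that biased estimate. The per-arm index decomposition, the grid-based maximization, and the bias schedule α(t) = log t used below are illustrative assumptions for exposition, not the exact constructions of the cited papers; the essential requirement is that α(t) grows to infinity while remaining sublinear in t.

```python
import numpy as np

def rbmle_arm_index(successes, pulls, alpha):
    """Reward-biased index for one Bernoulli arm.

    RBMLE selects theta maximizing  log L_t(theta) + alpha(t) * J(theta),
    where J(theta) is the optimal reward under theta.  For a Bernoulli bandit
    this decouples (dropping a rarely-binding ordering constraint) into a
    per-arm index: the gap between the reward-biased and the plain maximum
    of the arm's log-likelihood.  A simple grid search is used for clarity.
    """
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    log_lik = successes * np.log(grid) + (pulls - successes) * np.log(1.0 - grid)
    return np.max(log_lik + alpha * grid) - np.max(log_lik)

def run_rbmle_bandit(true_means, horizon, seed=0):
    """Play a Bernoulli bandit with the RBMLE index policy; return pseudo-regret."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes, pulls = np.zeros(k), np.zeros(k)
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                      # pull each arm once to initialize
            arm = t - 1
        else:
            alpha = np.log(t)           # illustrative schedule: alpha(t) -> infinity, alpha(t) = o(t)
            arm = int(np.argmax([rbmle_arm_index(successes[i], pulls[i], alpha)
                                 for i in range(k)]))
        reward = float(rng.random() < true_means[arm])
        successes[arm] += reward
        pulls[arm] += 1
        total_reward += reward
    return horizon * max(true_means) - total_reward

if __name__ == "__main__":
    print("pseudo-regret:", run_rbmle_bandit([0.3, 0.5, 0.7], horizon=5000))
```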