Reward Biased Maximum Likelihood Estimation for Reinforcement Learning

Akshay Mete, Rahul Singh, Xi Liu, P. R. Kumar

Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:815-827, 2021.

Abstract

The Reward-Biased Maximum Likelihood Estimate (RBMLE) for adaptive control of Markov chains was proposed in (Kumar and Becker, 1982) to overcome the central obstacle of what is variously called the fundamental “closed-loop identifiability problem” of adaptive control (Borkar and Varaiya, 1979), the “dual control problem” by Feldbaum (Feldbaum, 1960a,b), or, contemporaneously, the “exploration vs. exploitation problem”. It exploited the key observation that since the maximum likelihood parameter estimator can asymptotically identify only the closed-loop transition probabilities under a certainty equivalent approach (Borkar and Varaiya, 1979), the limiting parameter estimates must necessarily have an optimal reward that is less than the optimal reward attainable for the true but unknown system. Hence it proposed a counteracting reverse bias in favor of parameters with larger optimal rewards, providing a carefully structured solution to the fundamental problem alluded to above. It thereby proposed an optimistic approach of favoring parameters with larger optimal rewards, now known as “optimism in the face of uncertainty.” The RBMLE approach has been proved to be long-term average reward optimal in a variety of contexts including controlled Markov chains, linear quadratic Gaussian (LQG) systems, some nonlinear systems, and diffusions. However, modern attention is focused on the much finer notion of “regret,” or finite-time performance for all time, espoused by (Lai and Robbins, 1985). Recent analysis of RBMLE for multi-armed stochastic bandits (Liu et al., 2020) and linear contextual bandits (Hung et al., 2020) has shown that it not only has state-of-the-art regret, but it also exhibits empirical performance comparable to or better than the best current contenders, and leads to several new and strikingly simple index policies for these classical problems.
Motivated by this, we examine the finite-time performance of RBMLE for reinforcement learning tasks that involve the general problem of optimal control of unknown Markov Decision Processes. We show that it has a regret of O(log T) over a time horizon of T, comparable to state-of-the-art algorithms.
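The reward-biasing idea described above can be illustrated with a minimal numerical sketch for Bernoulli bandits (the setting of Liu et al., 2020). Instead of acting on the plain maximum likelihood estimate, each arm is scored by how much its log-likelihood can be raised when a reward bias α(t) · p is added to it; the arm whose likelihood "bends" most toward high reward is played. The index form `max_p [loglik(p) + α(t)·p] − max_p loglik(p)`, the grid-search maximization, and the choice α(t) = log t are illustrative assumptions here, not the exact closed-form indices derived in the paper.

```python
import math

def rbmle_index(successes, failures, alpha, grid=1000):
    """Reward-biased index for one Bernoulli arm (numerical sketch).

    Returns max_p [loglik(p) + alpha*p] - max_p loglik(p), computed by
    grid search over p in (0, 1). The gap measures how much the bias
    term alpha*p can tilt this arm's estimate toward higher reward.
    """
    def loglik(p):
        return successes * math.log(p) + failures * math.log(1.0 - p)

    # Grid midpoints avoid the log singularities at p = 0 and p = 1.
    ps = [(k + 0.5) / grid for k in range(grid)]
    biased = max(loglik(p) + alpha * p for p in ps)
    unbiased = max(loglik(p) for p in ps)
    return biased - unbiased

def choose_arm(stats, t):
    """Pick the arm with the largest reward-biased index.

    stats: list of (successes, failures) per arm.
    alpha(t) = log t is one common choice of bias schedule (assumption).
    """
    alpha = math.log(max(t, 2))
    indices = [rbmle_index(s, f, alpha) for (s, f) in stats]
    return max(range(len(indices)), key=lambda i: indices[i])
```

For example, with observation counts `[(5, 5), (9, 1)]` at t = 100, the second arm (empirical mean 0.9) receives the larger biased index and is selected; as t grows, α(t) grows, so under-explored arms with plausible high means keep getting boosted, which is exactly the counteracting optimistic bias the abstract describes.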

Cite this Paper

BibTeX

@InProceedings{pmlr-v144-mete21a,
title = {Reward Biased Maximum Likelihood Estimation for Reinforcement Learning},
author = {Mete, Akshay and Singh, Rahul and Liu, Xi and Kumar, P. R.},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
pages = {815--827},
year = {2021},
editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
volume = {144},
series = {Proceedings of Machine Learning Research},
month = {07 -- 08 June},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v144/mete21a/mete21a.pdf},
url = {https://proceedings.mlr.press/v144/mete21a.html},
}

Endnote

%0 Conference Paper
%T Reward Biased Maximum Likelihood Estimation for Reinforcement Learning
%A Akshay Mete
%A Rahul Singh
%A Xi Liu
%A P. R. Kumar
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-mete21a
%I PMLR
%P 815--827
%U https://proceedings.mlr.press/v144/mete21a.html
%V 144

APA

Mete, A., Singh, R., Liu, X., & Kumar, P. R. (2021). Reward Biased Maximum Likelihood Estimation for Reinforcement Learning. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:815-827. Available from https://proceedings.mlr.press/v144/mete21a.html.