Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes

Chen-Yu Wei, Mehdi Jafarnia Jahromi, Haipeng Luo, Hiteshi Sharma, Rahul Jain
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10170-10180, 2020.

Abstract

Model-free reinforcement learning is known to be memory- and computation-efficient and more amenable to large-scale problems. In this paper, two model-free algorithms are introduced for learning infinite-horizon average-reward Markov Decision Processes (MDPs). The first algorithm reduces the problem to the discounted-reward version and achieves $\mathcal{O}(T^{2/3})$ regret after $T$ steps, under the minimal assumption of weakly communicating MDPs. To our knowledge, this is the first model-free algorithm for general MDPs in this setting. The second algorithm makes use of recent advances in adaptive algorithms for adversarial multi-armed bandits and improves the regret to $\mathcal{O}(\sqrt{T})$, albeit with a stronger ergodic assumption. This result significantly improves over the $\mathcal{O}(T^{3/4})$ regret achieved by the only existing model-free algorithm by Abbasi-Yadkori et al. (2019) for ergodic MDPs in the infinite-horizon average-reward setting.
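For concreteness, regret in the infinite-horizon average-reward setting is measured against the optimal long-term average reward. A standard formulation (the paper's exact notation may differ) is

$$ R_T \;=\; T J^{*} \;-\; \sum_{t=1}^{T} r(s_t, a_t), \qquad J^{*} \;=\; \max_{\pi} \; \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[\sum_{t=1}^{T} r\bigl(s_t, \pi(s_t)\bigr)\right], $$

where $(s_t, a_t)$ is the state-action pair visited at step $t$. Both bounds above, $\mathcal{O}(T^{2/3})$ and $\mathcal{O}(\sqrt{T})$, are sublinear in $T$, so the learner's per-step reward converges to $J^{*}$.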

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wei20c,
  title     = {Model-free Reinforcement Learning in Infinite-horizon Average-reward {M}arkov Decision Processes},
  author    = {Wei, Chen-Yu and Jahromi, Mehdi Jafarnia and Luo, Haipeng and Sharma, Hiteshi and Jain, Rahul},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10170--10180},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wei20c/wei20c.pdf},
  url       = {https://proceedings.mlr.press/v119/wei20c.html},
  abstract  = {Model-free reinforcement learning is known to be memory- and computation-efficient and more amenable to large-scale problems. In this paper, two model-free algorithms are introduced for learning infinite-horizon average-reward Markov Decision Processes (MDPs). The first algorithm reduces the problem to the discounted-reward version and achieves $\mathcal{O}(T^{2/3})$ regret after $T$ steps, under the minimal assumption of weakly communicating MDPs. To our knowledge, this is the first model-free algorithm for general MDPs in this setting. The second algorithm makes use of recent advances in adaptive algorithms for adversarial multi-armed bandits and improves the regret to $\mathcal{O}(\sqrt{T})$, albeit with a stronger ergodic assumption. This result significantly improves over the $\mathcal{O}(T^{3/4})$ regret achieved by the only existing model-free algorithm by Abbasi-Yadkori et al. (2019) for ergodic MDPs in the infinite-horizon average-reward setting.}
}
Endnote
%0 Conference Paper
%T Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes
%A Chen-Yu Wei
%A Mehdi Jafarnia Jahromi
%A Haipeng Luo
%A Hiteshi Sharma
%A Rahul Jain
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wei20c
%I PMLR
%P 10170--10180
%U https://proceedings.mlr.press/v119/wei20c.html
%V 119
%X Model-free reinforcement learning is known to be memory- and computation-efficient and more amenable to large-scale problems. In this paper, two model-free algorithms are introduced for learning infinite-horizon average-reward Markov Decision Processes (MDPs). The first algorithm reduces the problem to the discounted-reward version and achieves $\mathcal{O}(T^{2/3})$ regret after $T$ steps, under the minimal assumption of weakly communicating MDPs. To our knowledge, this is the first model-free algorithm for general MDPs in this setting. The second algorithm makes use of recent advances in adaptive algorithms for adversarial multi-armed bandits and improves the regret to $\mathcal{O}(\sqrt{T})$, albeit with a stronger ergodic assumption. This result significantly improves over the $\mathcal{O}(T^{3/4})$ regret achieved by the only existing model-free algorithm by Abbasi-Yadkori et al. (2019) for ergodic MDPs in the infinite-horizon average-reward setting.
APA
Wei, C., Jahromi, M. J., Luo, H., Sharma, H. & Jain, R. (2020). Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10170-10180. Available from https://proceedings.mlr.press/v119/wei20c.html.