Online Convex Optimization in Adversarial Markov Decision Processes

Aviv Rosenberg, Yishay Mansour
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5478-5486, 2019.

Abstract

We consider online learning in episodic loop-free Markov decision processes (MDPs), where the loss function can change arbitrarily between episodes, and the transition function is not known to the learner. We show an $\tilde{O}(L|X|\sqrt{|A|T})$ regret bound, where $T$ is the number of episodes, $X$ is the state space, $A$ is the action space, and $L$ is the length of each episode. Our online algorithm is implemented using entropic regularization methodology, which allows us to extend the original adversarial MDP model to handle convex performance criteria (different ways to aggregate the losses of a single episode), as well as to improve previous regret bounds.
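For intuition, the entropic regularization methodology mentioned above can be sketched as an online mirror descent update over occupancy measures, in the style of O-REPS; the notation below ($\Delta(M)$, $\ell_t$, $\eta$) is illustrative and not quoted from the paper. At each episode $t$ the learner plays the policy induced by $q_t$ and then updates

$$q_{t+1} \in \arg\min_{q \in \Delta(M)} \; \eta \,\langle q, \ell_t \rangle + \mathrm{KL}(q \,\|\, q_t),$$

where $\Delta(M)$ denotes the set of occupancy measures of the MDP (intersected with a confidence set when the transition function is unknown), $\ell_t$ is the loss of episode $t$, and $\eta > 0$ is a learning rate.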

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rosenberg19a,
  title = {Online Convex Optimization in Adversarial {M}arkov Decision Processes},
  author = {Rosenberg, Aviv and Mansour, Yishay},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {5478--5486},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/rosenberg19a/rosenberg19a.pdf},
  url = {https://proceedings.mlr.press/v97/rosenberg19a.html},
  abstract = {We consider online learning in episodic loop-free Markov decision processes (MDPs), where the loss function can change arbitrarily between episodes, and the transition function is not known to the learner. We show an $\tilde{O}(L|X|\sqrt{|A|T})$ regret bound, where $T$ is the number of episodes, $X$ is the state space, $A$ is the action space, and $L$ is the length of each episode. Our online algorithm is implemented using entropic regularization methodology, which allows us to extend the original adversarial MDP model to handle convex performance criteria (different ways to aggregate the losses of a single episode), as well as to improve previous regret bounds.}
}
Endnote
%0 Conference Paper
%T Online Convex Optimization in Adversarial Markov Decision Processes
%A Aviv Rosenberg
%A Yishay Mansour
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rosenberg19a
%I PMLR
%P 5478--5486
%U https://proceedings.mlr.press/v97/rosenberg19a.html
%V 97
%X We consider online learning in episodic loop-free Markov decision processes (MDPs), where the loss function can change arbitrarily between episodes, and the transition function is not known to the learner. We show an $\tilde{O}(L|X|\sqrt{|A|T})$ regret bound, where $T$ is the number of episodes, $X$ is the state space, $A$ is the action space, and $L$ is the length of each episode. Our online algorithm is implemented using entropic regularization methodology, which allows us to extend the original adversarial MDP model to handle convex performance criteria (different ways to aggregate the losses of a single episode), as well as to improve previous regret bounds.
APA
Rosenberg, A. & Mansour, Y. (2019). Online Convex Optimization in Adversarial Markov Decision Processes. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5478-5486. Available from https://proceedings.mlr.press/v97/rosenberg19a.html.