Linear Programming for Large-Scale Markov Decision Problems

Alan Malek, Yasin Abbasi-Yadkori, Peter Bartlett
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):496-504, 2014.

Abstract

We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization, and one based on constraint sampling. In both cases, we give bounds that show that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.
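The dual linear program mentioned in the abstract is the standard average-cost formulation; as a sketch in conventional notation (the symbols \(c\), \(P\), \(\mu\), and \(\Phi\) below are the usual cost function, transition kernel, stationary state-action distribution, and feature matrix, and are our shorthand rather than notation taken from this page):

```latex
\begin{aligned}
\min_{\mu \ge 0} \quad & \sum_{s,a} \mu(s,a)\, c(s,a) \\
\text{s.t.} \quad & \sum_{a'} \mu(s',a') \;=\; \sum_{s,a} \mu(s,a)\, P(s' \mid s,a) \quad \text{for all } s', \\
& \sum_{s,a} \mu(s,a) \;=\; 1 .
\end{aligned}
```

The comparison class described in the abstract then corresponds to restricting \(\mu\) to (a neighborhood of) a low-dimensional set of the form \(\{\Phi\theta\}\), where the columns of \(\Phi\) are state-action features, so the number of optimization variables scales with the feature dimension rather than the size of the state space.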

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-malek14,
  title     = {Linear Programming for Large-Scale Markov Decision Problems},
  author    = {Malek, Alan and Abbasi-Yadkori, Yasin and Bartlett, Peter},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {496--504},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/malek14.pdf},
  url       = {https://proceedings.mlr.press/v32/malek14.html},
  abstract  = {We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization, and one based on constraint sampling. In both cases, we give bounds that show that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.}
}
Endnote
%0 Conference Paper
%T Linear Programming for Large-Scale Markov Decision Problems
%A Alan Malek
%A Yasin Abbasi-Yadkori
%A Peter Bartlett
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-malek14
%I PMLR
%P 496--504
%U https://proceedings.mlr.press/v32/malek14.html
%V 32
%N 2
%X We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization, and one based on constraint sampling. In both cases, we give bounds that show that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.
RIS
TY  - CPAPER
TI  - Linear Programming for Large-Scale Markov Decision Problems
AU  - Alan Malek
AU  - Yasin Abbasi-Yadkori
AU  - Peter Bartlett
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-malek14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 496
EP  - 504
L1  - http://proceedings.mlr.press/v32/malek14.pdf
UR  - https://proceedings.mlr.press/v32/malek14.html
AB  - We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization, and one based on constraint sampling. In both cases, we give bounds that show that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.
ER  -
APA
Malek, A., Abbasi-Yadkori, Y. & Bartlett, P. (2014). Linear Programming for Large-Scale Markov Decision Problems. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):496-504. Available from https://proceedings.mlr.press/v32/malek14.html.
