Scaling Up Robust MDPs using Function Approximation

Aviv Tamar, Shie Mannor, Huan Xu
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):181-189, 2014.

Abstract

We consider large-scale Markov decision processes (MDPs) with parameter uncertainty, under the robust MDP paradigm. Previous studies showed that robust MDPs, based on a minimax approach to handling uncertainty, can be solved using dynamic programming for small- to medium-sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this work we employ a reinforcement learning approach to tackle this planning problem: we develop a robust approximate dynamic programming method based on a projected fixed point equation to approximately solve large-scale robust MDPs. We show that the proposed method provably succeeds under certain technical conditions, and demonstrate its effectiveness through simulation of an option pricing problem. To the best of our knowledge, this is the first attempt to scale up the robust MDP paradigm.
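
The projected fixed-point approach described in the abstract amounts to iterating v <- Pi T(v), where T is the robust (minimax) Bellman operator and Pi projects onto the span of a feature matrix. The Python sketch below illustrates only this structure, under simplifying assumptions that are not taken from the paper: a finite uncertainty set of candidate transition kernels and a plain least-squares projection. All names are illustrative, and this is not the authors' exact algorithm.

import numpy as np

# Illustrative sketch of robust projected value iteration with linear
# function approximation. ASSUMPTIONS (not from the paper): the uncertainty
# set is a finite list of candidate transition kernels, and the projection
# Pi is unweighted least squares onto span(phi).

def robust_bellman(v, rewards, transition_models, gamma):
    """One robust Bellman backup: max over actions, min over models.

    rewards: (S, A) array; transition_models: list of (S, A, S) arrays,
    each a candidate transition kernel from the uncertainty set.
    """
    # Worst-case expected return over the uncertainty set, per (s, a).
    q_worst = np.min(
        [rewards + gamma * (P @ v) for P in transition_models], axis=0
    )
    return q_worst.max(axis=1)  # greedy over actions

def projected_robust_vi(phi, rewards, transition_models, gamma, iters=100):
    """Iterate v <- Pi T(v), projecting onto span(phi) by least squares.

    Convergence is not guaranteed for arbitrary features; the paper's
    guarantee holds only under its technical conditions.
    """
    w = np.zeros(phi.shape[1])
    for _ in range(iters):
        target = robust_bellman(phi @ w, rewards, transition_models, gamma)
        w, *_ = np.linalg.lstsq(phi, target, rcond=None)
    return w

if __name__ == "__main__":
    # Toy usage on random data (hypothetical sizes).
    S, A, k = 5, 2, 3
    rng = np.random.default_rng(0)
    phi = rng.standard_normal((S, k))       # feature matrix
    rewards = rng.standard_normal((S, A))
    models = []
    for _ in range(3):
        P = rng.random((S, A, S))
        P /= P.sum(axis=-1, keepdims=True)  # normalize to valid kernels
        models.append(P)
    print("weights:", projected_robust_vi(phi, rewards, models, gamma=0.9))

The toy loop only conveys the v ≈ Pi T(v) fixed-point structure; the paper's contribution is showing when and how such an iteration provably succeeds at scale.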

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-tamar14,
  title     = {Scaling Up Robust MDPs using Function Approximation},
  author    = {Tamar, Aviv and Mannor, Shie and Xu, Huan},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {181--189},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/tamar14.pdf},
  url       = {https://proceedings.mlr.press/v32/tamar14.html}
}
Endnote
%0 Conference Paper
%T Scaling Up Robust MDPs using Function Approximation
%A Aviv Tamar
%A Shie Mannor
%A Huan Xu
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-tamar14
%I PMLR
%P 181--189
%U https://proceedings.mlr.press/v32/tamar14.html
%V 32
%N 2
RIS
TY  - CPAPER
TI  - Scaling Up Robust MDPs using Function Approximation
AU  - Aviv Tamar
AU  - Shie Mannor
AU  - Huan Xu
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-tamar14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 181
EP  - 189
L1  - http://proceedings.mlr.press/v32/tamar14.pdf
UR  - https://proceedings.mlr.press/v32/tamar14.html
ER  -
APA
Tamar, A., Mannor, S. & Xu, H. (2014). Scaling Up Robust MDPs using Function Approximation. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):181-189. Available from https://proceedings.mlr.press/v32/tamar14.html.
