Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs

Yashaswini Murthy, Mehrdad Moharrami, R. Srikant
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:395-406, 2023.

Abstract

Modified policy iteration (MPI), also known as optimistic policy iteration, is at the core of many reinforcement learning algorithms. It works by combining elements of policy iteration and value iteration. The convergence of MPI has been well studied for discounted and average-cost MDPs. In this work, we consider the exponential-cost risk-sensitive MDP formulation, which is known to provide some robustness to model parameters. Although policy iteration and value iteration have been well studied in the context of risk-sensitive MDPs, modified policy iteration is relatively unexplored. We provide the first proof that MPI also converges for the risk-sensitive problem in the case of finite state and action spaces. Since the exponential-cost formulation involves a multiplicative Bellman equation, our main contribution is a convergence proof that is quite different from existing results for discounted and risk-neutral average-cost problems.
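As a rough illustration of the procedure the abstract describes, the following minimal NumPy sketch alternates a greedy improvement step with m sweeps of partial policy evaluation under the multiplicative Bellman operator of the exponential-cost formulation. It assumes a tabular MDP given by a transition tensor P, a cost matrix c, and a risk parameter beta; the m-sweep evaluation and the reference-state normalization follow the standard MPI / relative-value template and are an assumption for illustration, not the exact scheme analyzed in the paper.

```python
import numpy as np

def risk_sensitive_mpi(P, c, beta, m=5, iters=500, tol=1e-10):
    """Sketch of modified policy iteration for the exponential-cost
    (risk-sensitive) MDP, using the multiplicative Bellman operator
        (T_pi V)(s) = exp(beta * c(s, pi(s))) * sum_{s'} P(s' | s, pi(s)) V(s').

    P    : (S, A, S) array of transition probabilities
    c    : (S, A) array of one-step costs
    beta : risk-sensitivity parameter (> 0)
    m    : number of partial policy-evaluation sweeps per iteration
    """
    S, A, _ = P.shape
    V = np.ones(S)                      # any strictly positive starting vector
    rho = np.inf                        # running estimate of the risk-sensitive cost
    for _ in range(iters):
        # Policy improvement: greedy minimization of the multiplicative Q-values.
        Q = np.exp(beta * c) * (P @ V)  # Q[s, a] = e^{beta c(s,a)} * sum_s' P(s'|s,a) V(s')
        pi = Q.argmin(axis=1)
        # Partial policy evaluation: m sweeps of T_pi, renormalized at a
        # reference state (relative-value style) so that V stays bounded.
        P_pi = P[np.arange(S), pi]      # (S, S) transition matrix under pi
        c_pi = c[np.arange(S), pi]      # (S,) one-step costs under pi
        for _ in range(m):
            V_next = np.exp(beta * c_pi) * (P_pi @ V)
            growth = V_next[0]          # multiplicative growth at the reference state
            V = V_next / growth
        rho_new = np.log(growth) / beta # growth factor encodes exp(beta * rho)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return pi, V, rho
```

On a small toy instance, the returned growth factor exp(beta * rho) approximates the Perron eigenvalue of the multiplicative operator for the final policy, which is how the risk-sensitive cost is typically read off in this formulation.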

Cite this Paper

BibTeX
@InProceedings{pmlr-v211-murthy23a,
  title     = {Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs},
  author    = {Murthy, Yashaswini and Moharrami, Mehrdad and Srikant, R.},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {395--406},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/murthy23a/murthy23a.pdf},
  url       = {https://proceedings.mlr.press/v211/murthy23a.html}
}
Endnote
%0 Conference Paper
%T Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs
%A Yashaswini Murthy
%A Mehrdad Moharrami
%A R. Srikant
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-murthy23a
%I PMLR
%P 395--406
%U https://proceedings.mlr.press/v211/murthy23a.html
%V 211
APA
Murthy, Y., Moharrami, M. & Srikant, R. (2023). Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:395-406. Available from https://proceedings.mlr.press/v211/murthy23a.html.