Optimizing Percentile Criterion using Robust MDPs

Bahram Behzadian, Reazul Hasan Russel, Marek Petrik, Chin Pang Ho
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1009-1017, 2021.

Abstract

We address the problem of computing reliable policies in reinforcement learning problems with limited data. In particular, we compute policies that achieve good returns with high confidence when deployed. This objective, known as the percentile criterion, can be optimized using Robust MDPs (RMDPs). RMDPs generalize MDPs to allow for uncertain transition probabilities chosen adversarially from given ambiguity sets. We show that the RMDP solution’s sub-optimality depends on the spans of the ambiguity sets along the value function. We then propose new algorithms that minimize the span of ambiguity sets defined by weighted L1 and L-infinity norms. Our primary focus is on Bayesian guarantees, but we also describe how our methods apply to frequentist guarantees and derive new concentration inequalities for weighted L1 and L-infinity norms. Experimental results indicate that our optimized ambiguity sets improve significantly on prior construction methods.
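To make the robust formulation concrete, here is a minimal sketch (not the authors' code) of the kind of robust Bellman backup the paper builds on: for one state-action pair, the adversary picks the worst transition distribution inside a weighted-L1 ball around a nominal (e.g. posterior mean) distribution, which reduces to a small linear program. The names p_bar, w, and psi, and the use of a generic LP solver, are illustrative assumptions, not the paper's implementation.

# Inner adversarial problem of an RMDP with a weighted-L1 ambiguity set
#     P(s,a) = { p in simplex : sum_i w_i |p_i - p_bar_i| <= psi },
# solved as an LP:  min_{p in P(s,a)} p^T z,  where z = r(s,a) + gamma * v.
import numpy as np
from scipy.optimize import linprog

def worst_case_value(z, p_bar, w, psi):
    """min_p p^T z  s.t.  p in simplex,  sum_i w_i |p_i - p_bar_i| <= psi."""
    z, p_bar, w = map(np.asarray, (z, p_bar, w))
    n = z.size
    # Decision vector x = [p (n entries), t (n entries)], t_i >= w_i |p_i - p_bar_i|.
    c = np.concatenate([z, np.zeros(n)])
    A_ub = np.vstack([
        np.hstack([np.diag(w), -np.eye(n)]),    #  w_i p_i - t_i <=  w_i p_bar_i
        np.hstack([-np.diag(w), -np.eye(n)]),   # -w_i p_i - t_i <= -w_i p_bar_i
        np.hstack([np.zeros(n), np.ones(n)]),   #  sum_i t_i <= psi
    ])
    b_ub = np.concatenate([w * p_bar, -w * p_bar, [psi]])
    A_eq = np.hstack([np.ones(n), np.zeros(n)])[None, :]   # sum_i p_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(0, None)] * n, method="highs")
    return res.fun  # worst-case expected value over the ambiguity set

# Tiny usage example: the adversary shifts 0.1 of mass from the highest-value
# successor state to the lowest-value one, exhausting the budget psi = 0.2.
print(worst_case_value(z=np.array([1.0, 0.0, 2.0]),
                       p_bar=np.array([0.5, 0.3, 0.2]),
                       w=np.ones(3), psi=0.2))   # nominal 0.9, worst case 0.7

The paper's contribution lies upstream of this backup: choosing the weights w (and the budget psi) so that the span of the resulting ambiguity set along the value function, and hence the sub-optimality of the RMDP solution, is as small as possible.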

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-behzadian21a,
  title     = {Optimizing Percentile Criterion using Robust MDPs},
  author    = {Behzadian, Bahram and Hasan Russel, Reazul and Petrik, Marek and Pang Ho, Chin},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1009--1017},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/behzadian21a/behzadian21a.pdf},
  url       = {https://proceedings.mlr.press/v130/behzadian21a.html}
}
Endnote
%0 Conference Paper
%T Optimizing Percentile Criterion using Robust MDPs
%A Bahram Behzadian
%A Reazul Hasan Russel
%A Marek Petrik
%A Chin Pang Ho
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-behzadian21a
%I PMLR
%P 1009--1017
%U https://proceedings.mlr.press/v130/behzadian21a.html
%V 130
APA
Behzadian, B., Hasan Russel, R., Petrik, M. & Pang Ho, C. (2021). Optimizing Percentile Criterion using Robust MDPs. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1009-1017. Available from https://proceedings.mlr.press/v130/behzadian21a.html.