Finite Sample Analysis of Mean-Volatility Actor-Critic for Risk-Averse Reinforcement Learning

Khaled Eldowa, Lorenzo Bisi, Marcello Restelli
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10028-10066, 2022.

Abstract

The goal in the standard reinforcement learning problem is to find a policy that optimizes the expected return. However, such an objective is not adequate in many real-life applications, like finance, where controlling the uncertainty of the outcome is imperative. The mean-volatility objective penalizes, through a tunable parameter, policies with a high variance of the per-step reward. An interesting property of this objective is that it admits simple linear Bellman equations that resemble, up to a reward transformation, those of the risk-neutral case. However, the required reward transformation is policy-dependent, as it involves the (usually unknown) expected return of the policy being evaluated. In this work, we propose two general methods for policy evaluation under the mean-volatility objective: the direct method and the factored method. We then extend recent results on finite sample analysis in the risk-neutral actor-critic setting to the mean-volatility case. Our analysis shows that the sample complexity needed to attain an $\epsilon$-accurate stationary point is the same as in the risk-neutral version, whichever policy evaluation method is used for training the critic. Finally, we carry out experiments to test the proposed methods in a simple environment that exhibits a trade-off between optimality in expectation and uncertainty of outcome.
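The objective and the policy-dependent reward transformation mentioned above can be made concrete as follows. The notation is a hedged sketch consistent with the abstract (and with the mean-volatility literature it builds on), not a verbatim excerpt from the paper: $J(\pi)$ denotes the expected per-step reward under the policy's state-action distribution $d_\pi$, $\nu(\pi)$ its variance (the reward volatility), and $\lambda \ge 0$ the tunable risk-aversion parameter.

% Hedged formalization of the mean-volatility objective (illustrative notation).
\begin{align}
  \eta_\lambda(\pi) &= J(\pi) - \lambda\, \nu(\pi),
  &
  \nu(\pi) &= \mathbb{E}_{(s,a)\sim d_\pi}\!\left[\big(r(s,a) - J(\pi)\big)^2\right].
\end{align}
% The policy-dependent reward transformation: standard (risk-neutral) Bellman
% machinery applies to the transformed reward, but it requires the usually
% unknown expected return J(\pi) of the policy being evaluated.
\begin{equation}
  \tilde{r}_\lambda(s,a) = r(s,a) - \lambda\big(r(s,a) - J(\pi)\big)^2 .
\end{equation}

To illustrate why the policy dependence matters in practice, the following minimal Python sketch runs a standard TD(0) critic on the transformed reward while plugging in a running estimate of the unknown expected return. The function name, the estimator for $J(\pi)$, and the toy data are illustrative assumptions, not the paper's direct or factored evaluation methods.

import numpy as np

def mv_td0_critic(transitions, lam=0.1, gamma=0.99, alpha=0.05, beta=0.01):
    # Hypothetical sketch: TD(0) on the transformed reward
    # r - lam * (r - J_hat)^2, where J_hat is a running estimate of the
    # expected per-step reward (the policy-dependent quantity the
    # transformation needs). Not the paper's exact direct/factored methods.
    # transitions: iterable of (state, reward, next_state) tuples collected
    # under a fixed policy, with integer (tabular) state indices.
    transitions = list(transitions)
    n_states = 1 + max(max(s, s_next) for s, _, s_next in transitions)
    V = np.zeros(n_states)   # critic estimates for the transformed reward
    J_hat = 0.0              # running estimate of the mean per-step reward

    for s, r, s_next in transitions:
        J_hat += beta * (r - J_hat)               # track the expected reward
        r_tilde = r - lam * (r - J_hat) ** 2      # policy-dependent transformation
        td_error = r_tilde + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error                  # standard risk-neutral TD(0) step
    return V, J_hat

# Toy usage on a two-state chain with alternating rewards (illustrative only):
# larger lam penalizes the per-step reward fluctuations more heavily.
data = [(0, 1.0, 1), (1, 0.0, 0)] * 500
V, J_hat = mv_td0_critic(data, lam=0.5)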

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-eldowa22a,
  title = {Finite Sample Analysis of Mean-Volatility Actor-Critic for Risk-Averse Reinforcement Learning},
  author = {Eldowa, Khaled and Bisi, Lorenzo and Restelli, Marcello},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages = {10028--10066},
  year = {2022},
  editor = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume = {151},
  series = {Proceedings of Machine Learning Research},
  month = {28--30 Mar},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v151/eldowa22a/eldowa22a.pdf},
  url = {https://proceedings.mlr.press/v151/eldowa22a.html},
  abstract = {The goal in the standard reinforcement learning problem is to find a policy that optimizes the expected return. However, such an objective is not adequate in a lot of real-life applications, like finance, where controlling the uncertainty of the outcome is imperative. The mean-volatility objective penalizes, through a tunable parameter, policies with high variance of the per-step reward. An interesting property of this objective is that it admits simple linear Bellman equations that resemble, up to a reward transformation, those of the risk-neutral case. However, the required reward transformation is policy-dependent, and requires the (usually unknown) expected return of the used policy. In this work, we propose two general methods for policy evaluation under the mean-volatility objective: the direct method and the factored method. We then extend recent results for finite sample analysis in the risk-neutral actor-critic setting to the mean-volatility case. Our analysis shows that the sample complexity to attain an $\epsilon$-accurate stationary point is the same as that of the risk-neutral version, using either policy evaluation method for training the critic. Finally, we carry out experiments to test the proposed methods in a simple environment that exhibits some trade-off between optimality, in expectation, and uncertainty of outcome.}
}
Endnote
%0 Conference Paper
%T Finite Sample Analysis of Mean-Volatility Actor-Critic for Risk-Averse Reinforcement Learning
%A Khaled Eldowa
%A Lorenzo Bisi
%A Marcello Restelli
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-eldowa22a
%I PMLR
%P 10028--10066
%U https://proceedings.mlr.press/v151/eldowa22a.html
%V 151
%X The goal in the standard reinforcement learning problem is to find a policy that optimizes the expected return. However, such an objective is not adequate in a lot of real-life applications, like finance, where controlling the uncertainty of the outcome is imperative. The mean-volatility objective penalizes, through a tunable parameter, policies with high variance of the per-step reward. An interesting property of this objective is that it admits simple linear Bellman equations that resemble, up to a reward transformation, those of the risk-neutral case. However, the required reward transformation is policy-dependent, and requires the (usually unknown) expected return of the used policy. In this work, we propose two general methods for policy evaluation under the mean-volatility objective: the direct method and the factored method. We then extend recent results for finite sample analysis in the risk-neutral actor-critic setting to the mean-volatility case. Our analysis shows that the sample complexity to attain an $\epsilon$-accurate stationary point is the same as that of the risk-neutral version, using either policy evaluation method for training the critic. Finally, we carry out experiments to test the proposed methods in a simple environment that exhibits some trade-off between optimality, in expectation, and uncertainty of outcome.
APA
Eldowa, K., Bisi, L. & Restelli, M. (2022). Finite Sample Analysis of Mean-Volatility Actor-Critic for Risk-Averse Reinforcement Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10028-10066. Available from https://proceedings.mlr.press/v151/eldowa22a.html.
