Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm

Sajad Khodadadian, Zaiwei Chen, Siva Theja Maguluri
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:5420-5431, 2021.

Abstract

In this paper, we provide finite-sample convergence guarantees for an off-policy variant of the natural actor-critic (NAC) algorithm based on Importance Sampling. In particular, we show that the algorithm converges to a globally optimal policy with a sample complexity of $\mathcal{O}(\epsilon^{-3}\log^2(1/\epsilon))$ under an appropriate choice of stepsizes. In order to overcome the issue of large variance due to Importance Sampling, we propose the $Q$-trace algorithm for the critic, which is inspired by the V-trace algorithm (Espeholt et al., 2018). This enables us to explicitly control the bias and variance, and to characterize the trade-off between them. As an advantage of off-policy sampling, a major feature of our result is that we do not need any additional assumptions beyond the ergodicity of the Markov chain induced by the behavior policy.
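The clipped importance-sampling idea behind the Q-trace critic can be illustrated with a small sketch. The snippet below is not the paper's exact Q-trace recursion; it is a minimal one-step, tabular illustration under assumed names (clipped_is_td_update, rho_bar are hypothetical) that keeps only the rho-type truncation, whereas V-trace/Q-trace additionally clip the trace coefficients used for multi-step corrections.

import numpy as np

def clipped_is_td_update(Q, s, a, r, s_next, target_pi, behavior_pi,
                         gamma=0.99, alpha=0.1, rho_bar=1.0):
    """One tabular TD-style critic update with a clipped importance-sampling
    ratio (V-trace-style truncation). Illustrative sketch only.

    Q           : np.ndarray of shape (num_states, num_actions)
    target_pi   : target policy, shape (num_states, num_actions)
    behavior_pi : behavior policy, shape (num_states, num_actions)
    rho_bar     : truncation level; smaller values mean lower variance
                  of the update but a more biased fixed point.
    """
    # Importance-sampling ratio for the sampled action, truncated at rho_bar.
    rho = min(rho_bar, target_pi[s, a] / behavior_pi[s, a])

    # Expected next-state value under the *target* policy.
    v_next = np.dot(target_pi[s_next], Q[s_next])

    # Clipped off-policy TD error and the resulting update.
    td_error = rho * (r + gamma * v_next - Q[s, a])
    Q[s, a] += alpha * td_error
    return Q

Raising rho_bar toward the true ratio reduces the bias of the critic's fixed point but inflates the variance of the update; this is the bias-variance trade-off the abstract refers to.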

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-khodadadian21a,
  title     = {Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm},
  author    = {Khodadadian, Sajad and Chen, Zaiwei and Maguluri, Siva Theja},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5420--5431},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/khodadadian21a/khodadadian21a.pdf},
  url       = {https://proceedings.mlr.press/v139/khodadadian21a.html}
}
Endnote
%0 Conference Paper
%T Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm
%A Sajad Khodadadian
%A Zaiwei Chen
%A Siva Theja Maguluri
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-khodadadian21a
%I PMLR
%P 5420--5431
%U https://proceedings.mlr.press/v139/khodadadian21a.html
%V 139
APA
Khodadadian, S., Chen, Z. & Maguluri, S.T. (2021). Finite-Sample Analysis of Off-Policy Natural Actor-Critic Algorithm. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:5420-5431. Available from https://proceedings.mlr.press/v139/khodadadian21a.html.