Off-Policy Average Reward Actor-Critic with Deterministic Policy Search

Naman Saxena, Subhojyoti Khastagir, Shishir Kolathaya, Shalabh Bhatnagar
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:30130-30203, 2023.

Abstract

The average reward criterion is relatively less studied, as most existing works in the reinforcement learning literature consider the discounted reward criterion. A few recent works present on-policy average reward actor-critic algorithms, but the off-policy average reward actor-critic setting remains relatively unexplored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) algorithm. We first establish asymptotic convergence using an ODE-based analysis. Subsequently, we provide a finite-time analysis of the resulting stochastic approximation scheme with a linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We evaluate the average reward performance of the proposed ARO-DDPG algorithm on MuJoCo-based environments and observe better empirical performance than state-of-the-art on-policy average reward actor-critic algorithms.
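
To make the update structure concrete, the sketch below shows one ARO-DDPG-style update with a linear critic, which is the setting of the paper's finite-time analysis (the experiments use deep function approximation). This is a minimal illustrative sketch, not the authors' exact algorithm: the function name aro_ddpg_update, the feature map phi, the Jacobian helpers, the step sizes, and the simple averaging recursion for the average reward estimate are all assumptions made here for illustration.

import numpy as np

def aro_ddpg_update(w, rho, theta, s, a, r, s_next,
                    phi, mu, grad_a_phi, grad_theta_mu,
                    alpha_w=1e-3, alpha_rho=1e-3, alpha_theta=1e-4):
    """One off-policy update of the linear critic, the average reward estimate,
    and the deterministic actor, on a transition (s, a, r, s_next) sampled
    off-policy (e.g. from a replay buffer).

    w              : critic weights, differential Q(s, a) ~= phi(s, a) @ w
    rho            : scalar estimate of the average reward
    theta          : flat parameters of the deterministic policy mu_theta
    phi(s, a)      : critic feature vector, shape (feat_dim,)
    mu(theta, s)   : deterministic action, shape (action_dim,)
    grad_a_phi     : Jacobian of phi w.r.t. the action, shape (action_dim, feat_dim)
    grad_theta_mu  : Jacobian of mu w.r.t. theta, shape (param_dim, action_dim)
    Step sizes are illustrative defaults, not values from the paper.
    """
    a_next = mu(theta, s_next)

    # Average-reward TD error: (r - rho) replaces the discounted target r + gamma * Q'.
    delta = r - rho + phi(s_next, a_next) @ w - phi(s, a) @ w

    # Critic update and a simple averaging recursion for rho.
    # (Updating rho with the TD error, rho += alpha_rho * delta, is another common variant.)
    w_new = w + alpha_w * delta * phi(s, a)
    rho_new = rho + alpha_rho * (r - rho)

    # Deterministic policy gradient step: chain rule through a = mu_theta(s),
    # with grad_a Q(s, a) = grad_a_phi(s, a) @ w for the linear critic.
    a_cur = mu(theta, s)
    grad_a_Q = grad_a_phi(s, a_cur) @ w
    theta_new = theta + alpha_theta * grad_theta_mu(theta, s) @ grad_a_Q

    return w_new, rho_new, theta_new

if __name__ == "__main__":
    # Smoke test with random data; dimensions, features, and the linear actor are illustrative.
    rng = np.random.default_rng(0)
    sdim, adim, fdim = 3, 2, 6
    P = rng.standard_normal((fdim, sdim + adim))
    phi = lambda s, a: P @ np.concatenate([s, a])           # linear critic features
    grad_a_phi = lambda s, a: P[:, sdim:].T                 # (adim, fdim)
    mu = lambda theta, s: theta.reshape(sdim, adim).T @ s   # linear deterministic actor
    grad_theta_mu = lambda theta, s: np.kron(s, np.eye(adim)).T  # (sdim*adim, adim)
    w, rho, theta = np.zeros(fdim), 0.0, np.zeros(sdim * adim)
    s, s_next = rng.standard_normal(sdim), rng.standard_normal(sdim)
    a = mu(theta, s) + 0.1 * rng.standard_normal(adim)      # behaviour action with exploration noise
    w, rho, theta = aro_ddpg_update(w, rho, theta, s, a, 1.0, s_next,
                                    phi, mu, grad_a_phi, grad_theta_mu)
    print(rho, np.linalg.norm(w), np.linalg.norm(theta))

The structural difference from discounted DDPG is the bootstrap target: the differential critic uses r - rho + Q(s', mu(s')) in place of r + gamma * Q(s', mu(s')), with the average reward estimate rho tracked by its own stochastic approximation recursion.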

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-saxena23a,
  title     = {Off-Policy Average Reward Actor-Critic with Deterministic Policy Search},
  author    = {Saxena, Naman and Khastagir, Subhojyoti and Kolathaya, Shishir and Bhatnagar, Shalabh},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {30130--30203},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/saxena23a/saxena23a.pdf},
  url       = {https://proceedings.mlr.press/v202/saxena23a.html},
  abstract  = {The average reward criterion is relatively less studied as most existing works in the Reinforcement Learning literature consider the discounted reward criterion. There are few recent works that present on-policy average reward actor-critic algorithms, but average reward off-policy actor-critic is relatively less explored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) Algorithm. We first show asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite time analysis of the resulting stochastic approximation scheme with linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of our proposed ARO-DDPG algorithm and observe better empirical performance compared to state-of-the-art on-policy average reward actor-critic algorithms over MuJoCo-based environments.}
}
Endnote
%0 Conference Paper
%T Off-Policy Average Reward Actor-Critic with Deterministic Policy Search
%A Naman Saxena
%A Subhojyoti Khastagir
%A Shishir Kolathaya
%A Shalabh Bhatnagar
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-saxena23a
%I PMLR
%P 30130--30203
%U https://proceedings.mlr.press/v202/saxena23a.html
%V 202
%X The average reward criterion is relatively less studied as most existing works in the Reinforcement Learning literature consider the discounted reward criterion. There are few recent works that present on-policy average reward actor-critic algorithms, but average reward off-policy actor-critic is relatively less explored. In this work, we present both on-policy and off-policy deterministic policy gradient theorems for the average reward performance criterion. Using these theorems, we also present an Average Reward Off-Policy Deep Deterministic Policy Gradient (ARO-DDPG) Algorithm. We first show asymptotic convergence analysis using the ODE-based method. Subsequently, we provide a finite time analysis of the resulting stochastic approximation scheme with linear function approximator and obtain an $\epsilon$-optimal stationary policy with a sample complexity of $\Omega(\epsilon^{-2.5})$. We compare the average reward performance of our proposed ARO-DDPG algorithm and observe better empirical performance compared to state-of-the-art on-policy average reward actor-critic algorithms over MuJoCo-based environments.
APA
Saxena, N., Khastagir, S., Kolathaya, S., & Bhatnagar, S. (2023). Off-Policy Average Reward Actor-Critic with Deterministic Policy Search. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:30130-30203. Available from https://proceedings.mlr.press/v202/saxena23a.html.
