GRAC: Self-Guided and Self-Regularized Actor-Critic

Lin Shao, Yifan You, Mengyuan Yan, Shenli Yuan, Qingyun Sun, Jeannette Bohg
Proceedings of the 5th Conference on Robot Learning, PMLR 164:267-276, 2022.

Abstract

Deep reinforcement learning (DRL) algorithms have been demonstrated successfully on a range of challenging decision-making and control tasks. One dominant component of recent DRL algorithms is the target network, which mitigates divergence when learning the Q function. However, target networks can slow down the learning process due to delayed function updates. Our main contribution in this work is a self-regularized TD-learning method that addresses divergence without requiring a target network. Additionally, we propose a self-guided policy improvement method that combines the policy gradient with zero-order optimization to search a broad neighborhood for actions associated with higher Q-values. This makes learning more robust to local noise in the Q-function approximation and guides the updates of our actor network. Taken together, these components define GRAC, a novel self-guided and self-regularized actor-critic algorithm. We evaluate GRAC on OpenAI Gym tasks, outperforming the state of the art on four tasks and achieving competitive results on two others. We also apply GRAC to enable a non-anthropomorphic robotic hand to successfully accomplish an in-hand manipulation task in the real world.
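
The abstract names two mechanisms: a critic loss that regularizes its own bootstrap values instead of relying on a delayed target network, and a zero-order search (e.g., a cross-entropy-method-style sampler) for higher-Q actions in a neighborhood of the policy's output. The PyTorch sketch below is intended only to make those two ideas concrete; the network architecture, all hyperparameters, and the names QNet, self_regularized_critic_loss, and zero_order_action_search are illustrative choices made here, not the authors' reference implementation.

import torch
import torch.nn as nn

class QNet(nn.Module):
    # A plain Q(s, a) regressor; sizes are placeholders.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def self_regularized_critic_loss(q, s, a, r, s2, a2, gamma=0.99):
    # Standard TD error, except the bootstrap value is a frozen copy of
    # the *current* network's own prediction rather than a separate target
    # network; a second term penalizes the update for moving Q(s', a')
    # away from its pre-update value, playing the stabilizing role that a
    # target network normally plays.
    with torch.no_grad():
        q_next_old = q(s2, a2)                 # frozen pre-update value
        td_target = r + gamma * q_next_old
    td_loss = ((q(s, a) - td_target) ** 2).mean()
    reg_loss = ((q(s2, a2) - q_next_old) ** 2).mean()
    return td_loss + reg_loss

def zero_order_action_search(q, s, a_pi, n_iters=2, n_samples=64,
                             n_elite=8, sigma=0.2):
    # Zero-order (sampling-based) search: draw candidate actions in a
    # neighborhood of the policy's action a_pi, keep the highest-Q ones,
    # and refit the sampling distribution. Comparing many candidates makes
    # the result robust to local noise in the Q approximation.
    mean, std = a_pi, sigma * torch.ones_like(a_pi)
    for _ in range(n_iters):
        cand = mean.unsqueeze(1) + std.unsqueeze(1) * torch.randn(
            a_pi.shape[0], n_samples, a_pi.shape[-1])
        qs = q(s.unsqueeze(1).expand(-1, n_samples, -1), cand).squeeze(-1)
        idx = qs.topk(n_elite, dim=1).indices
        elite = cand.gather(
            1, idx.unsqueeze(-1).expand(-1, -1, cand.shape[-1]))
        mean, std = elite.mean(dim=1), elite.std(dim=1) + 1e-6
    return mean  # actions with (approximately) higher Q than a_pi

In a full agent, the actor would then be updated both by the usual policy gradient and by moving toward the higher-Q actions returned by the search, which is how the search can guide the actor-network updates described above.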

Cite this Paper

BibTeX
@InProceedings{pmlr-v164-shao22a,
  title     = {GRAC: Self-Guided and Self-Regularized Actor-Critic},
  author    = {Shao, Lin and You, Yifan and Yan, Mengyuan and Yuan, Shenli and Sun, Qingyun and Bohg, Jeannette},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {267--276},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/shao22a/shao22a.pdf},
  url       = {https://proceedings.mlr.press/v164/shao22a.html}
}
Endnote
%0 Conference Paper
%T GRAC: Self-Guided and Self-Regularized Actor-Critic
%A Lin Shao
%A Yifan You
%A Mengyuan Yan
%A Shenli Yuan
%A Qingyun Sun
%A Jeannette Bohg
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-shao22a
%I PMLR
%P 267--276
%U https://proceedings.mlr.press/v164/shao22a.html
%V 164
APA
Shao, L., You, Y., Yan, M., Yuan, S., Sun, Q. & Bohg, J. (2022). GRAC: Self-Guided and Self-Regularized Actor-Critic. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:267-276. Available from https://proceedings.mlr.press/v164/shao22a.html.
