Deep Reinforcement Learning with Robust and Smooth Policy

Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, Tuo Zhao
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8707-8718, 2020.

Abstract

Deep reinforcement learning (RL) has achieved great empirical success in various domains. However, the large search space of neural networks requires a large amount of data, which makes current RL algorithms sample inefficient. Motivated by the fact that many environments with continuous state spaces have smooth transitions, we propose to learn a policy that behaves smoothly with respect to states. We develop a new framework, \textbf{S}mooth \textbf{R}egularized \textbf{R}einforcement \textbf{L}earning ($\textbf{SR}^2\textbf{L}$), in which the policy is trained with a smoothness-inducing regularizer. Such regularization effectively constrains the search space and enforces smoothness in the learned policy. Moreover, the proposed framework also improves the robustness of the policy against measurement errors in the state space, and can be naturally extended to the distributionally robust setting. We apply the framework to both an on-policy algorithm (TRPO) and an off-policy algorithm (DDPG). Through extensive experiments, we demonstrate that our method achieves improved sample efficiency and robustness.
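To make the idea concrete, below is a minimal, hypothetical sketch of a smoothness-inducing regularizer of the kind the abstract describes: it penalizes the discrepancy between the policy's outputs at a state and at a randomly perturbed nearby state. The names `smoothness_penalty`, `eps`, and `lambda_s` are illustrative, not taken from the paper, and the paper's formulation may differ (for instance, an adversarial variant would maximize the discrepancy over the perturbation ball rather than sampling one perturbation).

```python
import torch
import torch.nn as nn

def smoothness_penalty(policy, states, eps=0.05):
    """Hypothetical smoothness-inducing penalty: compare the policy's
    action at each state with its action at a randomly perturbed state
    inside an l-infinity ball of radius eps. (An adversarial variant
    would maximize this discrepancy over the ball instead.)"""
    noise = torch.empty_like(states).uniform_(-eps, eps)
    actions = policy(states)
    perturbed_actions = policy(states + noise)
    # Mean squared difference between actions at nearby states.
    return ((actions - perturbed_actions) ** 2).sum(dim=-1).mean()

# Illustrative usage with a toy deterministic policy network.
policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
states = torch.randn(32, 8)  # a batch of observed states
reg = smoothness_penalty(policy, states)
# The penalty is added to the actor objective with a tuning weight:
# loss = actor_loss + lambda_s * reg
```

Because such a penalty depends only on the policy network and a batch of sampled states, it can in principle be attached to the actor objective of either an on-policy method such as TRPO or an off-policy method such as DDPG, matching the two instantiations the abstract mentions.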

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-shen20b,
  title = {Deep Reinforcement Learning with Robust and Smooth Policy},
  author = {Shen, Qianli and Li, Yan and Jiang, Haoming and Wang, Zhaoran and Zhao, Tuo},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {8707--8718},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/shen20b/shen20b.pdf},
  url = {https://proceedings.mlr.press/v119/shen20b.html},
  abstract = {Deep reinforcement learning (RL) has achieved great empirical success in various domains. However, the large search space of neural networks requires a large amount of data, which makes current RL algorithms sample inefficient. Motivated by the fact that many environments with continuous state spaces have smooth transitions, we propose to learn a policy that behaves smoothly with respect to states. We develop a new framework, \textbf{S}mooth \textbf{R}egularized \textbf{R}einforcement \textbf{L}earning ($\textbf{SR}^2\textbf{L}$), in which the policy is trained with a smoothness-inducing regularizer. Such regularization effectively constrains the search space and enforces smoothness in the learned policy. Moreover, the proposed framework also improves the robustness of the policy against measurement errors in the state space, and can be naturally extended to the distributionally robust setting. We apply the framework to both an on-policy algorithm (TRPO) and an off-policy algorithm (DDPG). Through extensive experiments, we demonstrate that our method achieves improved sample efficiency and robustness.}
}
Endnote
%0 Conference Paper
%T Deep Reinforcement Learning with Robust and Smooth Policy
%A Qianli Shen
%A Yan Li
%A Haoming Jiang
%A Zhaoran Wang
%A Tuo Zhao
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-shen20b
%I PMLR
%P 8707--8718
%U https://proceedings.mlr.press/v119/shen20b.html
%V 119
%X Deep reinforcement learning (RL) has achieved great empirical success in various domains. However, the large search space of neural networks requires a large amount of data, which makes current RL algorithms sample inefficient. Motivated by the fact that many environments with continuous state spaces have smooth transitions, we propose to learn a policy that behaves smoothly with respect to states. We develop a new framework, \textbf{S}mooth \textbf{R}egularized \textbf{R}einforcement \textbf{L}earning ($\textbf{SR}^2\textbf{L}$), in which the policy is trained with a smoothness-inducing regularizer. Such regularization effectively constrains the search space and enforces smoothness in the learned policy. Moreover, the proposed framework also improves the robustness of the policy against measurement errors in the state space, and can be naturally extended to the distributionally robust setting. We apply the framework to both an on-policy algorithm (TRPO) and an off-policy algorithm (DDPG). Through extensive experiments, we demonstrate that our method achieves improved sample efficiency and robustness.
APA
Shen, Q., Li, Y., Jiang, H., Wang, Z. & Zhao, T. (2020). Deep Reinforcement Learning with Robust and Smooth Policy. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8707-8718. Available from https://proceedings.mlr.press/v119/shen20b.html.
