Worst Cases Policy Gradients

Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov
Proceedings of the Conference on Robot Learning, PMLR 100:1078-1093, 2020.

Abstract

Recent advances in deep reinforcement learning have demonstrated the capability of learning complex control policies in many types of environments. When learning policies for safety-critical applications, it is important to be sensitive to risks and avoid catastrophic events. Towards this goal, we propose an actor-critic framework which models the uncertainty of the future and simultaneously learns a policy based on that uncertainty model. Specifically, given a distribution of the future return for any state and action, we optimize policies for varying levels of conditional Value-at-Risk (CVaR). The learned policy can map the same state to different actions depending on the propensity for risk. We demonstrate the effectiveness of our approach in the domain of driving simulations, where we learn maneuvers in two scenarios. Our learned controller can dynamically select actions along a continuous axis, where safe and conservative behaviors are found at one end and riskier behaviors at the other. Finally, when tested with very different simulation parameters, our risk-averse policies generalize significantly better than other reinforcement learning approaches.
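
The objective named in the abstract, conditional Value-at-Risk (CVaR), is the expected return over the worst alpha-fraction of outcomes. The short Python sketch below is illustrative only, not the authors' implementation: the sample-based estimator and the toy "safe" versus "risky" return distributions are assumptions made here to show how CVaR changes with the risk level alpha, the quantity the risk-conditioned policy is trained against.

# Illustrative sketch (not from the paper): a sample-based CVaR estimator and a
# toy comparison of two actions whose returns share the same mean but differ in spread.
import numpy as np

def cvar(returns, alpha):
    """Mean of the worst alpha-fraction of sampled returns.

    alpha -> 0 emphasizes the worst-case tail (risk-averse);
    alpha = 1 recovers the ordinary expected return (risk-neutral).
    """
    returns = np.sort(np.asarray(returns))          # ascending: worst outcomes first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the lower tail
    return float(returns[:k].mean())

rng = np.random.default_rng(0)
safe_returns  = rng.normal(loc=1.0, scale=0.2, size=10_000)  # narrow return distribution
risky_returns = rng.normal(loc=1.0, scale=2.0, size=10_000)  # same mean, heavy spread

for alpha in (0.05, 0.5, 1.0):
    print(f"alpha={alpha:4.2f}  safe CVaR={cvar(safe_returns, alpha):6.2f}  "
          f"risky CVaR={cvar(risky_returns, alpha):6.2f}")
# At small alpha the risky action scores much lower, so a policy optimizing CVaR
# at a low risk level prefers the safe action; at alpha = 1 both look equivalent.

Conditioning the actor on the risk level, as the abstract describes, then lets a single policy move along this continuous risk axis at run time, from conservative to riskier behavior, without retraining.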

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-tang20a,
  title     = {Worst Cases Policy Gradients},
  author    = {Tang, Yichuan Charlie and Zhang, Jian and Salakhutdinov, Ruslan},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {1078--1093},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/tang20a/tang20a.pdf},
  url       = {https://proceedings.mlr.press/v100/tang20a.html}
}
Endnote
%0 Conference Paper
%T Worst Cases Policy Gradients
%A Yichuan Charlie Tang
%A Jian Zhang
%A Ruslan Salakhutdinov
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-tang20a
%I PMLR
%P 1078--1093
%U https://proceedings.mlr.press/v100/tang20a.html
%V 100
APA
Tang, Y.C., Zhang, J. & Salakhutdinov, R. (2020). Worst Cases Policy Gradients. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:1078-1093. Available from https://proceedings.mlr.press/v100/tang20a.html.