Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions

Ezgi Korkmaz, Jonah Brown-Cohen
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:17534-17543, 2023.

Abstract

Learning in MDPs with highly complex state representations is currently possible due to multiple advancements in reinforcement learning algorithm design. However, this increase in complexity, and furthermore the growth in the dimensionality of observations, came at the cost of volatility that can be exploited via adversarial attacks (i.e., perturbations along worst-case directions in the observation space). To address this policy instability problem, we propose a novel method to detect the presence of these non-robust directions via a local quadratic approximation of the deep neural policy loss. Our method provides a theoretical basis for the fundamental cut-off between safe observations and adversarial observations. Furthermore, our technique is computationally efficient and does not depend on the method used to produce the worst-case directions. We conduct extensive experiments in the Arcade Learning Environment with several different adversarial attack techniques. Most significantly, we demonstrate the effectiveness of our approach even in the setting where non-robust directions are explicitly optimized to circumvent our proposed method.
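As a rough illustration of the idea named in the abstract, the sketch below estimates the curvature of a policy loss along a candidate perturbation direction using a finite-difference local quadratic approximation, and flags directions whose curvature exceeds a cut-off. This is not the paper's published algorithm: `policy_loss` is a toy stand-in for the deep neural policy loss, and the step size `h` and the `threshold` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def policy_loss(obs: np.ndarray) -> float:
    """Toy stand-in for the deep neural policy loss L(s); quadratic so
    the script runs stand-alone, with one benign and one high-curvature
    direction."""
    A = np.diag([0.1, 50.0])
    return 0.5 * float(obs @ A @ obs)

def directional_curvature(loss_fn, obs, direction, h=1e-3):
    """Finite-difference estimate of the second directional derivative
    d^2/dt^2 L(obs + t*v) at t = 0, i.e. the v^T H v term in the local
    quadratic approximation L(s + v) ~ L(s) + g.v + 0.5 v^T H v."""
    v = direction / np.linalg.norm(direction)
    return (loss_fn(obs + h * v) - 2.0 * loss_fn(obs)
            + loss_fn(obs - h * v)) / h**2

def looks_adversarial(loss_fn, obs, direction, threshold=10.0):
    """Flag a direction whose curvature exceeds the chosen cut-off."""
    return directional_curvature(loss_fn, obs, direction) > threshold

obs = np.array([1.0, 1.0])
print(looks_adversarial(policy_loss, obs, np.array([1.0, 0.0])))  # False: low curvature
print(looks_adversarial(policy_loss, obs, np.array([0.0, 1.0])))  # True: high curvature
```

Note that the curvature estimate needs only three loss evaluations per candidate direction and no knowledge of how the direction was generated, which is consistent with the abstract's claims of computational efficiency and attack-agnostic detection.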

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-korkmaz23a,
  title     = {Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions},
  author    = {Korkmaz, Ezgi and Brown-Cohen, Jonah},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {17534--17543},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/korkmaz23a/korkmaz23a.pdf},
  url       = {https://proceedings.mlr.press/v202/korkmaz23a.html}
}
Endnote
%0 Conference Paper
%T Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions
%A Ezgi Korkmaz
%A Jonah Brown-Cohen
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-korkmaz23a
%I PMLR
%P 17534--17543
%U https://proceedings.mlr.press/v202/korkmaz23a.html
%V 202
APA
Korkmaz, E. & Brown-Cohen, J. (2023). Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:17534-17543. Available from https://proceedings.mlr.press/v202/korkmaz23a.html.