Identifying untrustworthy predictions in neural networks by geometric gradient analysis

Leo Schwinn, An Nguyen, René Raab, Leon Bungert, Daniel Tenbrinck, Dario Zanca, Martin Burger, Bjoern Eskofier
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:854-864, 2021.

Abstract

The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications. Most existing methods either require retraining of a given model to achieve robust identification of adversarial attacks or are limited to out-of-distribution sample detection only. In this work, we propose a geometric gradient analysis (GGA) to improve the identification of untrustworthy predictions without retraining of a given model. GGA analyzes the geometry of the loss landscape of neural networks based on the saliency maps of their respective inputs. We observe considerable differences between the input gradient geometry of trustworthy and untrustworthy predictions. Using these differences, GGA outperforms prior approaches in detecting OOD data and adversarial attacks, including state-of-the-art and adaptive attacks.
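
The abstract describes GGA only at a high level. As an illustration of what "input gradient geometry" can mean in practice, the sketch below computes per-class saliency maps (gradients of each class score with respect to a single input) and their pairwise cosine similarities. The function name, the use of class logits, and the cosine-similarity statistic are assumptions made for illustration; they are not necessarily the authors' exact GGA procedure.

# A minimal PyTorch sketch, assuming "input gradient geometry" refers to the
# pairwise relations between per-class saliency maps of one input. This is an
# illustrative reading of the abstract, not the authors' exact GGA method.
import torch
import torch.nn.functional as F

def class_gradient_cosines(model, x, num_classes):
    """Pairwise cosine similarities between per-class input gradients of a single sample x."""
    x = x.clone().detach().requires_grad_(True)
    grads = []
    for c in range(num_classes):
        model.zero_grad(set_to_none=True)
        x.grad = None                        # reset the accumulated input gradient
        logits = model(x.unsqueeze(0))       # (1, num_classes); x is a single, unbatched input
        logits[0, c].backward()              # saliency map of class c w.r.t. the input
        grads.append(x.grad.detach().flatten())
    g = F.normalize(torch.stack(grads), dim=1)   # unit-length saliency vectors
    return g @ g.T                               # (num_classes, num_classes) cosine matrix

A detector built on top of such a matrix could, for example, threshold a summary statistic of how the predicted class's gradient relates to those of the remaining classes; the concrete decision rule used in the paper may differ and is given in the PDF linked below.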

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-schwinn21a,
  title     = {Identifying untrustworthy predictions in neural networks by geometric gradient analysis},
  author    = {Schwinn, Leo and Nguyen, An and Raab, Ren\'e and Bungert, Leon and Tenbrinck, Daniel and Zanca, Dario and Burger, Martin and Eskofier, Bjoern},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {854--864},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/schwinn21a/schwinn21a.pdf},
  url       = {https://proceedings.mlr.press/v161/schwinn21a.html},
  abstract  = {The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications. Most existing methods either require retraining of a given model to achieve robust identification of adversarial attacks or are limited to out-of-distribution sample detection only. In this work, we propose a geometric gradient analysis (GGA) to improve the identification of untrustworthy predictions without retraining of a given model. GGA analyzes the geometry of the loss landscape of neural networks based on the saliency maps of their respective inputs. We observe considerable differences between the input gradient geometry of trustworthy and untrustworthy predictions. Using these differences, GGA outperforms prior approaches in detecting OOD data and adversarial attacks, including state-of-the-art and adaptive attacks.}
}
Endnote
%0 Conference Paper
%T Identifying untrustworthy predictions in neural networks by geometric gradient analysis
%A Leo Schwinn
%A An Nguyen
%A René Raab
%A Leon Bungert
%A Daniel Tenbrinck
%A Dario Zanca
%A Martin Burger
%A Bjoern Eskofier
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-schwinn21a
%I PMLR
%P 854--864
%U https://proceedings.mlr.press/v161/schwinn21a.html
%V 161
%X The susceptibility of deep neural networks to untrustworthy predictions, including out-of-distribution (OOD) data and adversarial examples, still prevents their widespread use in safety-critical applications. Most existing methods either require retraining of a given model to achieve robust identification of adversarial attacks or are limited to out-of-distribution sample detection only. In this work, we propose a geometric gradient analysis (GGA) to improve the identification of untrustworthy predictions without retraining of a given model. GGA analyzes the geometry of the loss landscape of neural networks based on the saliency maps of their respective inputs. We observe considerable differences between the input gradient geometry of trustworthy and untrustworthy predictions. Using these differences, GGA outperforms prior approaches in detecting OOD data and adversarial attacks, including state-of-the-art and adaptive attacks.
APA
Schwinn, L., Nguyen, A., Raab, R., Bungert, L., Tenbrinck, D., Zanca, D., Burger, M. & Eskofier, B. (2021). Identifying untrustworthy predictions in neural networks by geometric gradient analysis. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:854-864. Available from https://proceedings.mlr.press/v161/schwinn21a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v161/schwinn21a/schwinn21a.pdf