Provable Robustness of ReLU networks via Maximization of Linear Regions

Francesco Croce, Maksym Andriushchenko, Matthias Hein
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2057-2066, 2019.

Abstract

It has been shown that neural network classifiers are not robust. This raises concerns about their usage in safety-critical systems. We propose in this paper a regularization scheme for ReLU networks which provably improves the robustness of the classifier by maximizing the linear regions of the classifier as well as the distance to the decision boundary. Using our regularization we can even find the minimal adversarial perturbation for a certain fraction of test points for large networks. In the experiments we show that our approach improves upon pure adversarial training both in terms of lower and upper bounds on the robustness and is comparable or better than the state of the art in terms of test error and robustness.

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-croce19a,
  title     = {Provable Robustness of ReLU networks via Maximization of Linear Regions},
  author    = {Croce, Francesco and Andriushchenko, Maksym and Hein, Matthias},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2057--2066},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/croce19a/croce19a.pdf},
  url       = {https://proceedings.mlr.press/v89/croce19a.html},
  abstract  = {It has been shown that neural network classifiers are not robust. This raises concerns about their usage in safety-critical systems. We propose in this paper a regularization scheme for ReLU networks which provably improves the robustness of the classifier by maximizing the linear regions of the classifier as well as the distance to the decision boundary. Using our regularization we can even find the minimal adversarial perturbation for a certain fraction of test points for large networks. In the experiments we show that our approach improves upon pure adversarial training both in terms of lower and upper bounds on the robustness and is comparable or better than the state of the art in terms of test error and robustness.}
}
Endnote
%0 Conference Paper
%T Provable Robustness of ReLU networks via Maximization of Linear Regions
%A Francesco Croce
%A Maksym Andriushchenko
%A Matthias Hein
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-croce19a
%I PMLR
%P 2057--2066
%U https://proceedings.mlr.press/v89/croce19a.html
%V 89
%X It has been shown that neural network classifiers are not robust. This raises concerns about their usage in safety-critical systems. We propose in this paper a regularization scheme for ReLU networks which provably improves the robustness of the classifier by maximizing the linear regions of the classifier as well as the distance to the decision boundary. Using our regularization we can even find the minimal adversarial perturbation for a certain fraction of test points for large networks. In the experiments we show that our approach improves upon pure adversarial training both in terms of lower and upper bounds on the robustness and is comparable or better than the state of the art in terms of test error and robustness.
APA
Croce, F., Andriushchenko, M. &amp; Hein, M. (2019). Provable Robustness of ReLU networks via Maximization of Linear Regions. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2057-2066. Available from https://proceedings.mlr.press/v89/croce19a.html.