Differentiable Abstract Interpretation for Provably Robust Neural Networks

Matthew Mirman, Timon Gehr, Martin Vechev
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3578-3586, 2018.

Abstract

We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision and show these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
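The abstract transformers the paper refers to propagate sets of inputs (rather than single points) through the network. As a hedged illustration of the simplest such domain, the sketch below implements interval (Box) propagation through an affine layer and a ReLU; the toy weights and the helper names (`affine_transform`, `relu_transform`) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def affine_transform(lo, hi, W, b):
    """Box abstract transformer for an affine layer y = W x + b.
    Positive weights map lower bounds to lower bounds; negative
    weights swap the roles of the two bounds."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_transform(lo, hi):
    """ReLU is monotone, so it applies to each bound independently."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Propagate an L-infinity ball of radius eps around input x through a
# toy two-layer network (weights chosen arbitrarily for illustration).
x = np.array([0.5, -0.2])
eps = 0.1
lo, hi = x - eps, x + eps

W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.3])
W2, b2 = np.array([[2.0, 1.0], [-1.0, 0.5]]), np.array([0.0, 0.2])

lo, hi = relu_transform(*affine_transform(lo, hi, W1, b1))
lo, hi = affine_transform(lo, hi, W2, b2)

# Robustness is certified if the correct class's lower bound exceeds
# every other class's upper bound over the whole input region.
certified = lo[0] > hi[1]
```

Because every bound computation is built from differentiable operations, the width of the output interval (or the margin `lo[correct] - hi[other]`) can serve directly as a training loss, which is what makes this style of certification trainable at scale.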

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-mirman18b,
  title     = {Differentiable Abstract Interpretation for Provably Robust Neural Networks},
  author    = {Mirman, Matthew and Gehr, Timon and Vechev, Martin},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3578--3586},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/mirman18b/mirman18b.pdf},
  url       = {http://proceedings.mlr.press/v80/mirman18b.html},
  abstract  = {We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision and show these can be used to train large neural networks that are certifiably robust to adversarial perturbations.}
}
Endnote
%0 Conference Paper
%T Differentiable Abstract Interpretation for Provably Robust Neural Networks
%A Matthew Mirman
%A Timon Gehr
%A Martin Vechev
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-mirman18b
%I PMLR
%P 3578--3586
%U http://proceedings.mlr.press/v80/mirman18b.html
%V 80
%X We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers which balance efficiency with precision and show these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
APA
Mirman, M., Gehr, T. & Vechev, M. (2018). Differentiable Abstract Interpretation for Provably Robust Neural Networks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3578-3586. Available from http://proceedings.mlr.press/v80/mirman18b.html.