Differentiable Abstract Interpretation for Provably Robust Neural Networks
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3578-3586, 2018.
Abstract
We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers that balance efficiency with precision, and we show that these can be used to train large neural networks that are certifiably robust to adversarial perturbations.
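To make the idea of an abstract transformer concrete, the following is a minimal sketch of the simplest abstract domain used in this line of work, the interval (box) domain: each activation is over-approximated by a lower and upper bound, and each layer is replaced by a transformer that soundly propagates those bounds. The function names and the tiny example network are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def affine_transformer(lo, hi, W, b):
    """Interval transformer for an affine layer x -> W @ x + b.

    Splitting W into its positive and negative parts ensures each
    output bound is computed from the correct input endpoint.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_transformer(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Illustrative example: propagate an L-infinity ball of radius eps
# around an input point through one affine layer followed by ReLU.
x = np.array([0.5, -0.2])
eps = 0.1
lo, hi = x - eps, x + eps           # box around the input
W = np.array([[1.0, -1.0],
              [2.0, 0.5]])
b = np.array([0.0, -0.5])
lo, hi = affine_transformer(lo, hi, W, b)
lo, hi = relu_transformer(lo, hi)
# lo and hi now soundly bound every output reachable from the ball;
# if the bounds keep the network's decision unchanged, robustness
# at this point is certified.
```

Because both transformers are built from differentiable operations, the width of the output box (or a loss derived from it) can be backpropagated through, which is what allows such bounds to be used as a training objective rather than only as a post-hoc check.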