Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12368-12379, 2021.
Abstract
It is well-known that standard neural networks, even with high classification accuracy, are vulnerable to small ℓ∞-norm bounded adversarial perturbations. Despite many attempts, most previous works can only provide empirical verification of a defense against particular attack methods, or can only develop a certified guarantee of model robustness in limited scenarios. In this paper, we seek a new approach to develop a theoretically principled neural network that inherently resists ℓ∞ perturbations. In particular, we design a novel neuron that uses ℓ∞-distance as its basic operation (which we call the ℓ∞-dist neuron), and show that any neural network constructed with ℓ∞-dist neurons (called an ℓ∞-dist net) is naturally a 1-Lipschitz function with respect to the ℓ∞-norm. This directly provides a rigorous certified-robustness guarantee based on the margin of the prediction outputs. We then prove that such networks have enough expressive power to approximate any 1-Lipschitz function, with a robust generalization guarantee. We further provide a holistic training strategy that can greatly alleviate optimization difficulties. Experimental results show that using ℓ∞-dist nets as basic building blocks, we consistently achieve state-of-the-art performance on commonly used datasets: 93.09% certified accuracy on MNIST (ϵ=0.3), 35.42% on CIFAR-10 (ϵ=8/255) and 16.31% on TinyImageNet (ϵ=1/255).
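To make the abstract's core idea concrete, the following is a minimal NumPy sketch of a layer of ℓ∞-dist neurons: each neuron outputs the ℓ∞ distance between the input and its weight vector. The layer sizes, the two-layer toy net, and the empirical Lipschitz check are illustrative assumptions, not the paper's actual architecture or training procedure. Since |‖x−w‖∞ − ‖y−w‖∞| ≤ ‖x−y‖∞ for every neuron, each layer (and hence the whole net) is 1-Lipschitz with respect to the ℓ∞-norm.

```python
import numpy as np

def linf_dist_layer(x, W):
    # Each output unit j computes the l-infinity distance between
    # the input vector x and its weight vector W[j].
    # This map is 1-Lipschitz w.r.t. the l-infinity norm.
    return np.max(np.abs(x[None, :] - W), axis=1)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # toy layer shapes (illustrative only)
W2 = rng.normal(size=(3, 8))

def net(x):
    # Stacking l-inf-dist layers preserves 1-Lipschitzness,
    # since a composition of 1-Lipschitz maps is 1-Lipschitz.
    return linf_dist_layer(linf_dist_layer(x, W1), W2)

# Empirically check the 1-Lipschitz property on a random perturbation:
x = rng.normal(size=4)
y = x + rng.uniform(-0.1, 0.1, size=4)
out_gap = np.max(np.abs(net(x) - net(y)))   # change in outputs (l-inf)
in_gap = np.max(np.abs(x - y))              # change in inputs (l-inf)
assert out_gap <= in_gap + 1e-12
```

Because every logit moves by at most ε under any ℓ∞ perturbation of size ε, a prediction whose top-two logit margin exceeds 2ε is certifiably robust, which is the margin-based guarantee the abstract refers to.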