Bayesian Inference with Certifiable Adversarial Robustness

Matthew Wicker, Luca Laurenti, Andrea Patane, Zhuotong Chen, Zheng Zhang, Marta Kwiatkowska
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2431-2439, 2021.

Abstract

We consider adversarial training of deep neural networks through the lens of Bayesian learning and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of non-convex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in $\epsilon$-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST, and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.
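To give a flavour of the core idea, the sketch below shows one common way to compute a worst-case cross-entropy over an $\epsilon$-ball using interval bound propagation through a tiny two-layer ReLU network. This is an illustrative toy, not the paper's implementation: the architecture, function names, and the choice of interval relaxation are assumptions for exposition (the paper applies such relaxations within Bayesian approximate inference).

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x @ W + b (interval bound propagation)."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = mid @ W + b
    rad_out = rad @ np.abs(W)          # radius grows with |W|
    return mid_out - rad_out, mid_out + rad_out

def robust_cross_entropy(x, y, W1, b1, W2, b2, eps):
    """Upper bound on cross-entropy over the eps-ball around x.

    Lower-bounds the true-class logit, upper-bounds all other logits,
    then evaluates cross-entropy on these pessimistic logits.
    """
    lo, hi = interval_affine(x - eps, x + eps, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    worst = hi.copy()
    worst[y] = lo[y]                   # pessimistic logit vector
    z = worst - worst.max()            # numerically stable log-softmax
    return -(z[y] - np.log(np.exp(z).sum()))
```

With `eps = 0` the intervals collapse and this reduces exactly to the standard cross-entropy; increasing `eps` widens the intervals, so the loss can only grow, which is what lets minimising it enforce robustness to worst-case perturbations.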

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-wicker21a,
  title     = {Bayesian Inference with Certifiable Adversarial Robustness},
  author    = {Wicker, Matthew and Laurenti, Luca and Patane, Andrea and Chen, Zhuotong and Zhang, Zheng and Kwiatkowska, Marta},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2431--2439},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/wicker21a/wicker21a.pdf},
  url       = {https://proceedings.mlr.press/v130/wicker21a.html},
  abstract  = {We consider adversarial training of deep neural networks through the lens of Bayesian learning and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of non-convex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in $\epsilon$-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST, and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.}
}
Endnote
%0 Conference Paper
%T Bayesian Inference with Certifiable Adversarial Robustness
%A Matthew Wicker
%A Luca Laurenti
%A Andrea Patane
%A Zhuotong Chen
%A Zheng Zhang
%A Marta Kwiatkowska
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-wicker21a
%I PMLR
%P 2431--2439
%U https://proceedings.mlr.press/v130/wicker21a.html
%V 130
%X We consider adversarial training of deep neural networks through the lens of Bayesian learning and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of non-convex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in $\epsilon$-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST, and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.
APA
Wicker, M., Laurenti, L., Patane, A., Chen, Z., Zhang, Z. & Kwiatkowska, M. (2021). Bayesian Inference with Certifiable Adversarial Robustness. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2431-2439. Available from https://proceedings.mlr.press/v130/wicker21a.html.