Probabilistic Safety for Bayesian Neural Networks

Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:1198-1207, 2020.

Abstract

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, $T \subseteq \mathbb{R}^m$, we study the probability w.r.t. the BNN posterior that all the points in $T$ are mapped to the same region $S$ in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the case of interval and linear function propagation techniques. We apply our methods to BNNs trained on a regression task, airborne collision avoidance, and MNIST, empirically showing that our approach allows one to certify probabilistic safety of BNNs with millions of parameters.
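As a sketch of the quantity studied (notation ours, not taken verbatim from the paper): writing $f^{w}$ for the network with weights $w$ drawn from the posterior $p(w \mid \mathcal{D})$, probabilistic safety of $T$ with respect to an output region $S$ can be written as

$$P_{\mathrm{safe}}(T, S) \;=\; \mathrm{Prob}_{w \sim p(w \mid \mathcal{D})}\big[\, \forall x \in T:\ f^{w}(x) \in S \,\big].$$

The lower bound mentioned in the abstract can then be understood as constructing a set of weights $H$ such that every deterministic network with $w \in H$ is certified (e.g. by interval or linear function propagation) to map all of $T$ into $S$, which gives $P_{\mathrm{safe}}(T, S) \geq \mathrm{Prob}_{w \sim p(w \mid \mathcal{D})}[\, w \in H \,]$.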

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-wicker20a, title = {Probabilistic Safety for Bayesian Neural Networks}, author = {Wicker, Matthew and Laurenti, Luca and Patane, Andrea and Kwiatkowska, Marta}, booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)}, pages = {1198--1207}, year = {2020}, editor = {Peters, Jonas and Sontag, David}, volume = {124}, series = {Proceedings of Machine Learning Research}, month = {03--06 Aug}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v124/wicker20a/wicker20a.pdf}, url = {https://proceedings.mlr.press/v124/wicker20a.html}, abstract = {We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, $T \subseteq R^m$, we study the probability w.r.t. the BNN posterior that all the points in $T$ are mapped to the same region $S$ in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the case of interval and linear function propagation techniques. We apply our methods to BNNs trained on a regression task, airborne collision avoidance, and MNIST, empirically showing that our approach allows one to certify probabilistic safety of BNNs with millions of parameters.} }
Endnote
%0 Conference Paper %T Probabilistic Safety for Bayesian Neural Networks %A Matthew Wicker %A Luca Laurenti %A Andrea Patane %A Marta Kwiatkowska %B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI) %C Proceedings of Machine Learning Research %D 2020 %E Jonas Peters %E David Sontag %F pmlr-v124-wicker20a %I PMLR %P 1198--1207 %U https://proceedings.mlr.press/v124/wicker20a.html %V 124 %X We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, $T \subseteq R^m$, we study the probability w.r.t. the BNN posterior that all the points in $T$ are mapped to the same region $S$ in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the case of interval and linear function propagation techniques. We apply our methods to BNNs trained on a regression task, airborne collision avoidance, and MNIST, empirically showing that our approach allows one to certify probabilistic safety of BNNs with millions of parameters.
APA
Wicker, M., Laurenti, L., Patane, A. &amp; Kwiatkowska, M. (2020). Probabilistic Safety for Bayesian Neural Networks. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:1198-1207. Available from https://proceedings.mlr.press/v124/wicker20a.html.