Tight Verification of Probabilistic Robustness in Bayesian Neural Networks

Ben Batten, Mehran Hosseini, Alessio Lomuscio
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4906-4914, 2024.

Abstract

We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs). Computing robustness guarantees for BNNs is significantly more challenging than verifying the robustness of standard Neural Networks (NNs), because it requires searching the parameter space for safe weights. Moreover, tight and complete approaches for the verification of standard NNs, such as those based on Mixed-Integer Linear Programming (MILP), cannot be used directly for BNN verification because of the polynomial terms arising from the consecutive multiplication of the variables encoding the weights. Our algorithms efficiently and effectively search the parameter space for safe weights using iterative expansion and the network’s gradient, and they can be combined with any BNN verification algorithm of choice. In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we evaluate them against the SoA on standard benchmarks, such as MNIST and CIFAR-10, showing that they compute bounds up to 40% tighter.
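As context for the abstract: probabilistic robustness of a BNN can be read as the probability, over weights drawn from the posterior, that the resulting deterministic network is robust around the input, so a verified set of safe weights yields a lower bound on that probability. Below is a minimal, self-contained sketch (PyTorch, not the authors' code) of the gradient-guided iterative-expansion idea the abstract describes: grow an axis-aligned box in weight space around a weight sample, expanding least along the directions where the classification margin is most gradient-sensitive, and keep the largest box that passes a safety check. The architecture, step schedule, and box_is_safe spot-check are illustrative stand-ins; a real pipeline would plug in a sound weight-box verifier here.

import torch

torch.manual_seed(0)

# Hypothetical toy classifier standing in for the BNN's network architecture.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3)
)
x, label = torch.randn(4), 0  # a fixed input and its true class

def margin(model, x, label):
    # Classification margin: true-class logit minus the best other logit.
    logits = model(x)
    other = max(logits[j] for j in range(logits.numel()) if j != label)
    return logits[label] - other

def flat_params(model):
    return torch.cat([p.detach().flatten() for p in model.parameters()])

def set_params(model, flat):
    i = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(flat[i:i + n].view_as(p))
        i += n

def sensitivity(model, x, label):
    # |d margin / d w|: how sharply the margin reacts to each weight.
    model.zero_grad()
    margin(model, x, label).backward()
    return torch.cat([p.grad.flatten().abs() for p in model.parameters()])

def box_is_safe(model, center, radius, x, label, n_samples=200):
    # Placeholder for a sound verifier over the weight box [center +- radius];
    # random spot-checks only, so this does NOT certify safety by itself.
    for _ in range(n_samples):
        w = center + (2 * torch.rand_like(center) - 1) * radius
        set_params(model, w)
        if margin(model, x, label) <= 0:
            return False
    return True

center = flat_params(net)
sens = sensitivity(net, x, label)
# Expand least along gradient-sensitive directions; cap the per-step growth.
step = torch.clamp(0.01 / (sens + 1e-8), max=0.1)
radius = torch.zeros_like(center)
for _ in range(20):  # iterative expansion, backing off when a check fails
    candidate = radius + step
    if box_is_safe(net, center, candidate, x, label):
        radius = candidate
    else:
        step = step / 2
set_params(net, center)  # restore the original weights
print("safe-box radii: min %.4g, max %.4g"
      % (radius.min().item(), radius.max().item()))

The posterior mass of the certified boxes (e.g. estimated by sampling weights from the posterior and checking box membership) would then lower-bound the probabilistic robustness the paper verifies.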

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-batten24a,
  title     = {Tight Verification of Probabilistic Robustness in {B}ayesian Neural Networks},
  author    = {Batten, Ben and Hosseini, Mehran and Lomuscio, Alessio},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4906--4914},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/batten24a/batten24a.pdf},
  url       = {https://proceedings.mlr.press/v238/batten24a.html}
}
Endnote
%0 Conference Paper
%T Tight Verification of Probabilistic Robustness in Bayesian Neural Networks
%A Ben Batten
%A Mehran Hosseini
%A Alessio Lomuscio
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-batten24a
%I PMLR
%P 4906--4914
%U https://proceedings.mlr.press/v238/batten24a.html
%V 238
APA
Batten, B., Hosseini, M. & Lomuscio, A. (2024). Tight Verification of Probabilistic Robustness in Bayesian Neural Networks. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4906-4914. Available from https://proceedings.mlr.press/v238/batten24a.html.
