PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach

Lily Weng, Pin-Yu Chen, Lam Nguyen, Mark Squillante, Akhilan Boopathy, Ivan Oseledets, Luca Daniel
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6727-6736, 2019.

Abstract

We propose a novel framework PROVEN to \textbf{PRO}babilistically \textbf{VE}rify \textbf{N}eural networks' robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbations follow a distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case neural network robustness verification frameworks, and therefore it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten robustness certificates by factors of around $1.8 \times$ and $3.5 \times$ with at least $99.99\%$ confidence, compared with the worst-case robustness certificates of CROWN and CNN-Cert.
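The core idea, building a probability certificate on top of a worst-case linear bound, can be illustrated with a minimal sketch. This is not the paper's exact derivation: it assumes a hypothetical linear lower bound $g(x) = a \cdot x + b$ on the classification margin over bounded perturbations, and applies Hoeffding's inequality to bound the probability that the margin stays nonnegative when the perturbation coordinates are independent and bounded. The function name `prob_certificate` and the example values are illustrative.

```python
import numpy as np

def prob_certificate(a, b, eps):
    """Illustrative sketch (not the paper's exact algorithm).

    Given a linear lower bound g(x) = a . x + b on the classification
    margin, valid for perturbations x with each coordinate independent,
    zero-mean, and bounded in [-eps, eps], return a Hoeffding-style
    lower bound on P(g(x) >= 0).
    """
    a = np.asarray(a, dtype=float)
    mean = b  # zero-mean perturbation => E[g(x)] = b
    if mean <= 0:
        return 0.0  # this linear bound yields no probabilistic guarantee
    # Each term a_i * x_i ranges over an interval of width 2 * |a_i| * eps.
    # Hoeffding: P(g - E[g] <= -t) <= exp(-2 t^2 / sum((2 eps a_i)^2))
    denom = np.sum((2.0 * eps * a) ** 2)
    if denom == 0:
        return 1.0  # bound is constant and positive
    return 1.0 - np.exp(-2.0 * mean**2 / denom)

# Example: a toy 3-dimensional linear margin bound.
a = np.array([0.5, -0.3, 0.2])
print(prob_certificate(a, b=1.0, eps=0.5))
```

Because the probability of an adversarial margin violation under a perturbation distribution is typically far smaller than the worst case over the entire perturbation ball, such a bound certifies a larger radius at a fixed confidence level, which is the intuition behind the tightened certificates reported above.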

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-weng19a,
  title = {{PROVEN}: Verifying Robustness of Neural Networks with a Probabilistic Approach},
  author = {Weng, Lily and Chen, Pin-Yu and Nguyen, Lam and Squillante, Mark and Boopathy, Akhilan and Oseledets, Ivan and Daniel, Luca},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {6727--6736},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/weng19a/weng19a.pdf},
  url = {https://proceedings.mlr.press/v97/weng19a.html},
  abstract = {We propose a novel framework PROVEN to \textbf{PRO}babilistically \textbf{VE}rify \textbf{N}eural networks' robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbations follow a distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case neural network robustness verification frameworks, and therefore it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten robustness certificates by factors of around $1.8 \times$ and $3.5 \times$ with at least $99.99\%$ confidence, compared with the worst-case robustness certificates of CROWN and CNN-Cert.}
}
Endnote
%0 Conference Paper
%T PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
%A Lily Weng
%A Pin-Yu Chen
%A Lam Nguyen
%A Mark Squillante
%A Akhilan Boopathy
%A Ivan Oseledets
%A Luca Daniel
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-weng19a
%I PMLR
%P 6727--6736
%U https://proceedings.mlr.press/v97/weng19a.html
%V 97
%X We propose a novel framework PROVEN to \textbf{PRO}babilistically \textbf{VE}rify \textbf{N}eural networks' robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbations follow a distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case neural network robustness verification frameworks, and therefore it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten robustness certificates by factors of around $1.8 \times$ and $3.5 \times$ with at least $99.99\%$ confidence, compared with the worst-case robustness certificates of CROWN and CNN-Cert.
APA
Weng, L., Chen, P., Nguyen, L., Squillante, M., Boopathy, A., Oseledets, I. & Daniel, L. (2019). PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6727-6736. Available from https://proceedings.mlr.press/v97/weng19a.html.