VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees

Anahita Baninajjar, Ahmed Rezine, Amir Aminifar
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:2846-2856, 2024.

Abstract

Machine learning techniques often lack formal correctness guarantees, as evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees has motivated several research efforts aimed at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during formal verification to tackle the scalability challenge often results in inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs). We present a post-training optimization framework that balances preserving prediction performance against verification-friendliness. The resulting VNNs are comparable to the original DNNs in prediction performance while remaining amenable to formal verification techniques. This enables us to establish robustness for more VNNs than their DNN counterparts, in a time-efficient manner.
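
To make the pipeline the abstract describes concrete, below is a minimal Python sketch of one crude instantiation: take a trained ReLU network, zero out small-magnitude weights as long as predictions on held-out data are preserved (a stand-in for the paper's post-training optimization), and then attempt to certify local robustness with interval bound propagation (IBP). The toy network, the pruning threshold, and the use of IBP as the verifier are all illustrative assumptions, not the authors' actual formulation.

# Illustrative sketch only -- NOT the authors' method. The paper casts VNN
# generation as a post-training optimization problem; here simple magnitude
# pruning with an accuracy-preservation check approximates that trade-off,
# and interval bound propagation (IBP) stands in for the formal verifier.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained 2-layer ReLU classifier (random stand-in weights).
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def predict(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def ibp_bounds(lo, hi, W, b):
    # Propagate the box [lo, hi] through an affine layer: positive weights
    # carry the matching bound, negative weights carry the opposite bound.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def verified_robust(x, eps, label):
    # True if `label` provably wins for every input in the L-inf ball of
    # radius eps around x (sound but incomplete, like any over-approximation).
    lo, hi = ibp_bounds(x - eps, x + eps, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = ibp_bounds(lo, hi, W2, b2)
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

# "Verification-friendly" post-processing: zero out small weights as long as
# the prediction on a (toy) validation point is preserved -- a crude proxy
# for the paper's balance between accuracy and verifiability.
x_val = rng.normal(size=8)
y_val = int(np.argmax(predict(x_val)))
for W in (W1, W2):
    small = np.abs(W) < 0.3          # pruning threshold: an assumption
    saved = W[small].copy()
    W[small] = 0.0
    if int(np.argmax(predict(x_val))) != y_val:
        W[small] = saved             # roll back if the prediction changes

print("verified at eps=0.01:", verified_robust(x_val, 0.01, y_val))

In a real pipeline, the magnitude-pruning heuristic would be replaced by the paper's constrained post-training optimization and the IBP check by a full-fledged verifier; the sketch only illustrates why sparser weights tend to yield tighter over-approximations and hence fewer inconclusive verification runs.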

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-baninajjar24a,
  title     = {{VNN}: Verification-Friendly Neural Networks with Hard Robustness Guarantees},
  author    = {Baninajjar, Anahita and Rezine, Ahmed and Aminifar, Amir},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {2846--2856},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/baninajjar24a/baninajjar24a.pdf},
  url       = {https://proceedings.mlr.press/v235/baninajjar24a.html}
}
Endnote
%0 Conference Paper
%T VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
%A Anahita Baninajjar
%A Ahmed Rezine
%A Amir Aminifar
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-baninajjar24a
%I PMLR
%P 2846--2856
%U https://proceedings.mlr.press/v235/baninajjar24a.html
%V 235
APA
Baninajjar, A., Rezine, A. & Aminifar, A. (2024). VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:2846-2856. Available from https://proceedings.mlr.press/v235/baninajjar24a.html.