Certifiably Quantisation-Robust training and inference of Neural Networks

Hue Dang, Matthew Robert Wicker, Goetz Botterweck, Andrea Patane
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5104-5112, 2025.

Abstract

We tackle the problem of computing guarantees for the robustness of neural networks against quantisation of their inputs, parameters and activation values. In particular, we pose the problem of bounding the worst-case discrepancy between the original neural network and all possible quantised ones, parametrised by a given maximum quantisation diameter $\epsilon > 0$, over a finite dataset. To achieve this, we first reformulate the problem as a bilinear optimisation, which can be solved to obtain provable bounds on the robustness guarantee. We then show how a fast scheme based on interval bound propagation can be developed and applied during training so as to allow the learning of neural networks that are robust against a continuous family of quantisation techniques. We evaluate our methodology on a variety of architectures on the MNIST, F-MNIST and CIFAR10 datasets. We demonstrate how non-trivial bounds on guaranteed accuracy can be obtained for several architectures and how quantisation robustness can be significantly improved through robust training.
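As a rough illustration of the interval-bound-propagation idea mentioned in the abstract (this is a minimal sketch, not the authors' implementation; the helper names interval_linear and ibp_forward are hypothetical), the snippet below propagates sound lower and upper bounds on the logits of a small ReLU network when every weight and bias may be perturbed by at most the quantisation diameter $\epsilon$. Input and activation quantisation, which the paper also covers, are omitted here for brevity.

# Hedged sketch: interval bound propagation (IBP) over parameter perturbations
# of magnitude at most eps, assuming a plain fully-connected ReLU network.
import numpy as np

def interval_linear(x_lo, x_hi, W, b, eps):
    """Bound y = Wx + b when each entry of W and b may shift by up to eps
    (e.g. due to quantisation) and x lies in the box [x_lo, x_hi]."""
    W_lo, W_hi = W - eps, W + eps
    b_lo, b_hi = b - eps, b + eps
    # Interval multiplication: take element-wise min/max over the four
    # corner products of [W_lo, W_hi] * [x_lo, x_hi].
    p1, p2 = W_lo * x_lo, W_lo * x_hi
    p3, p4 = W_hi * x_lo, W_hi * x_hi
    prod_lo = np.minimum(np.minimum(p1, p2), np.minimum(p3, p4))
    prod_hi = np.maximum(np.maximum(p1, p2), np.maximum(p3, p4))
    return prod_lo.sum(axis=1) + b_lo, prod_hi.sum(axis=1) + b_hi

def ibp_forward(x, weights, biases, eps):
    """Return lower/upper bounds on every logit that hold for all parameter
    perturbations of magnitude <= eps."""
    lo, hi = x.copy(), x.copy()
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_linear(lo, hi, W, b, eps)
        if i < len(weights) - 1:
            # ReLU is monotone, so it can be applied to both bounds directly.
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Ws = [rng.normal(size=(16, 8)), rng.normal(size=(3, 16))]
    bs = [np.zeros(16), np.zeros(3)]
    x = rng.normal(size=8)
    lo, hi = ibp_forward(x, Ws, bs, eps=1 / 256)  # e.g. an 8-bit-style step size
    print("logit lower bounds:", lo)
    print("logit upper bounds:", hi)
    # If the lower bound of the predicted class exceeds the upper bounds of all
    # other classes, no quantisation within eps can flip the prediction on x.

Bounds of this kind can also be used as a training-time penalty, which is roughly how a robust-training scheme of the sort described in the abstract would encourage quantisation robustness; the specific loss used in the paper is not reproduced here.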

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-dang25a,
  title     = {Certifiably Quantisation-Robust training and inference of Neural Networks},
  author    = {Dang, Hue and Wicker, Matthew Robert and Botterweck, Goetz and Patane, Andrea},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5104--5112},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/dang25a/dang25a.pdf},
  url       = {https://proceedings.mlr.press/v258/dang25a.html},
  abstract  = {We tackle the problem of computing guarantees for the robustness of neural networks against quantisation of their inputs, parameters and activation values. In particular, we pose the problem of bounding the worst-case discrepancy between the original neural network and all possible quantised ones parametrised by a given maximum quantisation diameter $\epsilon > 0$ over a finite dataset. To achieve this, we first reformulate the problem in terms of bilinear optimisation, which can be solved for provable bounds on the robustness guarantee. We then show how a quick scheme based on interval bound propagation can be developed and implemented during training so to allow for the learning of neural networks robust against a continuous family of quantisation techniques. We evaluated our methodology on a variety of architectures on datasets such as MNIST, F-MNIST and CIFAR10. We demonstrate how non-trivial bounds on guaranteed accuracy can be obtained on several architectures and how quantisation robustness can be significantly improved through robust training.}
}
Endnote
%0 Conference Paper
%T Certifiably Quantisation-Robust training and inference of Neural Networks
%A Hue Dang
%A Matthew Robert Wicker
%A Goetz Botterweck
%A Andrea Patane
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-dang25a
%I PMLR
%P 5104--5112
%U https://proceedings.mlr.press/v258/dang25a.html
%V 258
%X We tackle the problem of computing guarantees for the robustness of neural networks against quantisation of their inputs, parameters and activation values. In particular, we pose the problem of bounding the worst-case discrepancy between the original neural network and all possible quantised ones parametrised by a given maximum quantisation diameter $\epsilon > 0$ over a finite dataset. To achieve this, we first reformulate the problem in terms of bilinear optimisation, which can be solved for provable bounds on the robustness guarantee. We then show how a quick scheme based on interval bound propagation can be developed and implemented during training so to allow for the learning of neural networks robust against a continuous family of quantisation techniques. We evaluated our methodology on a variety of architectures on datasets such as MNIST, F-MNIST and CIFAR10. We demonstrate how non-trivial bounds on guaranteed accuracy can be obtained on several architectures and how quantisation robustness can be significantly improved through robust training.
APA
Dang, H., Wicker, M.R., Botterweck, G. & Patane, A. (2025). Certifiably Quantisation-Robust training and inference of Neural Networks. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5104-5112. Available from https://proceedings.mlr.press/v258/dang25a.html.
