Certifying Ensembles: A General Certification Theory with S-Lipschitzness

Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip Torr, Adel Bibi
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27709-27736, 2023.

Abstract

Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results give precise conditions under which ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions under which they are less robust.
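
Illustrative note (not from the paper): the S-Lipschitz framework generalises the classical Lipschitz certificate, a minimal sketch of which is given below. If every logit of a classifier is L-Lipschitz in the input, a perturbation of norm below margin / (2L) cannot flip the argmax prediction, and averaging classifiers with constants L_1, ..., L_n yields an ensemble that is at most (1/n) sum_i L_i-Lipschitz by the triangle inequality. The function certified_radius and all numbers here are hypothetical, chosen only to show that the naive ensemble radius can land between those of its constituents, the gap the paper's analysis characterises precisely.

    import numpy as np

    # Sketch of the classical Lipschitz certificate (not the paper's
    # S-Lipschitz machinery): if every logit is L-Lipschitz, the top
    # logit drops and the runner-up rises by at most L * ||delta|| each,
    # so perturbations below margin / (2L) cannot change the prediction.
    def certified_radius(logits, lipschitz_const):
        runner_up, top = np.sort(logits)[-2:]
        return (top - runner_up) / (2.0 * lipschitz_const)

    # Averaging a 1.0-Lipschitz and a 0.5-Lipschitz classifier gives an
    # ensemble that is at most 0.75-Lipschitz. With these (made-up)
    # logits, the ensemble's naive radius falls between the two.
    logits_a = np.array([0.1, 0.2, 0.9])    # classifier 1, L1 = 1.0
    logits_b = np.array([0.0, 0.3, 0.8])    # classifier 2, L2 = 0.5
    ensemble = 0.5 * (logits_a + logits_b)  # averaged logits, L <= 0.75

    print(certified_radius(logits_a, 1.0))   # 0.35
    print(certified_radius(logits_b, 0.5))   # 0.50
    print(certified_radius(ensemble, 0.75))  # 0.40 -- between the two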

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-petrov23a,
  title     = {Certifying Ensembles: A General Certification Theory with S-Lipschitzness},
  author    = {Petrov, Aleksandar and Eiras, Francisco and Sanyal, Amartya and Torr, Philip and Bibi, Adel},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {27709--27736},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/petrov23a/petrov23a.pdf},
  url       = {https://proceedings.mlr.press/v202/petrov23a.html},
  abstract  = {Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results are precise conditions when ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions when they are less robust.}
}
Endnote
%0 Conference Paper
%T Certifying Ensembles: A General Certification Theory with S-Lipschitzness
%A Aleksandar Petrov
%A Francisco Eiras
%A Amartya Sanyal
%A Philip Torr
%A Adel Bibi
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-petrov23a
%I PMLR
%P 27709--27736
%U https://proceedings.mlr.press/v202/petrov23a.html
%V 202
%X Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results are precise conditions when ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions when they are less robust.
APA
Petrov, A., Eiras, F., Sanyal, A., Torr, P. & Bibi, A. (2023). Certifying Ensembles: A General Certification Theory with S-Lipschitzness. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:27709-27736. Available from https://proceedings.mlr.press/v202/petrov23a.html.