Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss

Bo-Han Lai, Pin-Han Huang, Bo-Han Kung, Shang-Tse Chen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32246-32277, 2025.

Abstract

Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at GitHub Link.
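The BRO layer's name points to the block (Householder) reflector from numerical linear algebra, and the abstract's margin argument rests on the standard Lipschitz certificate: for a 1-Lipschitz classifier under the l2 norm, an input whose top-two logit margin is m is certifiably robust within radius m / sqrt(2), so enlarging margins directly enlarges certified radii. As a rough illustration only, assuming the standard block-reflector form W = I - 2 V (V^T V)^{-1} V^T rather than the paper's exact parameterization (which also covers convolutions), the following PyTorch sketch builds an orthogonal, hence 1-Lipschitz, weight matrix from an unconstrained parameter V; the helper name block_reflector is ours, not from the paper.

import torch

def block_reflector(V: torch.Tensor) -> torch.Tensor:
    """Map an unconstrained n x k parameter V to the block reflector
    W = I - 2 V (V^T V)^{-1} V^T, which is symmetric and orthogonal,
    and therefore 1-Lipschitz in the l2 norm."""
    n, k = V.shape
    # Gram matrix of the block; assumed full rank (k <= n, which holds
    # almost surely for a random initialization).
    gram = V.T @ V
    # Solve a linear system instead of forming an explicit inverse,
    # for numerical stability: proj = V (V^T V)^{-1} V^T.
    proj = V @ torch.linalg.solve(gram, V.T)
    return torch.eye(n, device=V.device, dtype=V.dtype) - 2.0 * proj

# Sanity check: W^T W should be (numerically) the identity.
V = torch.randn(8, 3, dtype=torch.float64)
W = block_reflector(V)
assert torch.allclose(W.T @ W, torch.eye(8, dtype=torch.float64), atol=1e-10)

Because W is produced by construction rather than by a projection step, gradients flow through V directly, which is the usual appeal of reflector-style orthogonal parameterizations over iterative orthogonalization.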

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lai25c,
  title = {Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss},
  author = {Lai, Bo-Han and Huang, Pin-Han and Kung, Bo-Han and Chen, Shang-Tse},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {32246--32277},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lai25c/lai25c.pdf},
  url = {https://proceedings.mlr.press/v267/lai25c.html},
  abstract = {Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at GitHub Link.}
}
Endnote
%0 Conference Paper
%T Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss
%A Bo-Han Lai
%A Pin-Han Huang
%A Bo-Han Kung
%A Shang-Tse Chen
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lai25c
%I PMLR
%P 32246--32277
%U https://proceedings.mlr.press/v267/lai25c.html
%V 267
%X Lipschitz neural networks are well known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers in constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at GitHub Link.
APA
Lai, B., Huang, P., Kung, B., & Chen, S. (2025). Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:32246-32277. Available from https://proceedings.mlr.press/v267/lai25c.html.
