Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration

Blaise Delattre, Quentin Barthélemy, Alexandre Araujo, Alexandre Allauzen
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:7513-7532, 2023.

Abstract

Since the control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, the estimation of this value is nowadays a real scientific challenge. In this paper we introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory and a new alternative to the power iteration. Called the Gram iteration, our approach exhibits superlinear convergence. First, we show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability. Then, it proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
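The core idea behind the Gram iteration can be illustrated on a plain dense matrix: repeatedly forming the Gram matrix G ← GᵀG squares every singular value, so after k steps the Frobenius norm raised to the power 1/2ᵏ upper-bounds the spectral norm, with the slack vanishing superlinearly. The NumPy sketch below is a simplified illustration of that idea only; the paper's actual method additionally exploits the circulant/FFT structure of convolutional layers, which is omitted here, and the function name and iteration count are illustrative choices, not the authors' code.

```python
import numpy as np

def gram_iteration_bound(W, n_iter=6):
    """Upper-bound the spectral norm sigma_1(W) of a dense matrix
    via the Gram iteration (simplified sketch, no convolution structure)."""
    G = np.asarray(W, dtype=np.float64)
    log_scale = 0.0  # accumulated log of rescaling factors, avoids overflow
    for _ in range(n_iter):
        fro = np.linalg.norm(G)            # Frobenius norm
        G = G / fro                        # rescale before squaring
        log_scale = 2.0 * (log_scale + np.log(fro))
        G = G.T @ G                        # Gram step: squares each singular value
    # sigma_1(W) <= ||G_k||_F ** (1 / 2**k), unrolled through the rescaling
    return np.exp((log_scale + np.log(np.linalg.norm(G))) / 2.0 ** n_iter)

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 5))
true_sigma = np.linalg.norm(W, 2)   # exact spectral norm, for comparison
bound = gram_iteration_bound(W)     # valid upper bound, tight after few steps
```

Because each Gram step doubles the exponent on every singular value, the gap between the Frobenius-norm surrogate and the true spectral norm shrinks doubly exponentially in k, which is the superlinear convergence claimed in the abstract; the per-step rescaling keeps the entries in floating-point range.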

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-delattre23a,
  title     = {Efficient Bound of {L}ipschitz Constant for Convolutional Layers by {G}ram Iteration},
  author    = {Delattre, Blaise and Barth\'{e}lemy, Quentin and Araujo, Alexandre and Allauzen, Alexandre},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {7513--7532},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/delattre23a/delattre23a.pdf},
  url       = {https://proceedings.mlr.press/v202/delattre23a.html},
  abstract  = {Since the control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, the estimation of this value is nowadays a real scientific challenge. In this paper we introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory and a new alternative to the Power iteration. Called the Gram iteration, our approach exhibits a superlinear convergence. First, we show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability. Then, it proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.}
}
Endnote
%0 Conference Paper
%T Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration
%A Blaise Delattre
%A Quentin Barthélemy
%A Alexandre Araujo
%A Alexandre Allauzen
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-delattre23a
%I PMLR
%P 7513--7532
%U https://proceedings.mlr.press/v202/delattre23a.html
%V 202
%X Since the control of the Lipschitz constant has a great impact on the training stability, generalization, and robustness of neural networks, the estimation of this value is nowadays a real scientific challenge. In this paper we introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory and a new alternative to the Power iteration. Called the Gram iteration, our approach exhibits a superlinear convergence. First, we show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability. Then, it proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
APA
Delattre, B., Barthélemy, Q., Araujo, A. &amp; Allauzen, A. (2023). Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:7513-7532. Available from https://proceedings.mlr.press/v202/delattre23a.html.