Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation

Zakhar Shumaylov, Jeremy Budd, Subhadip Mukherjee, Carola-Bibiane Schönlieb
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:45286-45314, 2024.

Abstract

Variational regularisation is the primary method for solving inverse problems, and recently there has been considerable work leveraging deeply learned regularisation for enhanced performance. However, few results exist addressing the convergence of such regularisation, particularly within the context of critical points as opposed to global minimisers. In this paper, we present a generalised formulation of convergent regularisation in terms of critical points, and show that this is achieved by a class of weakly convex regularisers. We prove convergence of the primal-dual hybrid gradient method for the associated variational problem, and, given a Kurdyka-Łojasiewicz condition, an $\mathcal{O}(\log{k}/k)$ ergodic convergence rate. Finally, applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks (IWCNN), and show empirically that IWCNNs can lead to improved performance of learned adversarial regularisers for computed tomography (CT) reconstruction.
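For readers unfamiliar with the setting, the variational problem referred to above is the standard one for inverse problems; the notation below (forward operator $A$, data $y$, learned regulariser $R_\theta$, weak-convexity modulus $\rho$) is generic and not taken from the paper, so read it as a sketch of the usual setup rather than the authors' exact formulation:

$$ \hat{x} \in \arg\min_{x} \; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R_\theta(x), $$

where $R_\theta$ is $\rho$-weakly convex, meaning $x \mapsto R_\theta(x) + \tfrac{\rho}{2}\|x\|_2^2$ is convex. The primal-dual hybrid gradient (Chambolle-Pock) method mentioned in the abstract is, in its generic convex form for $\min_x F(Kx) + G(x)$,

$$ y^{k+1} = \mathrm{prox}_{\sigma F^*}\!\big(y^k + \sigma K \bar{x}^k\big), \qquad x^{k+1} = \mathrm{prox}_{\tau G}\!\big(x^k - \tau K^{*} y^{k+1}\big), \qquad \bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^k); $$

the paper's contribution concerns the weakly convex case, with the precise splitting, step-size conditions, and Kurdyka-Łojasiewicz assumptions given in the paper itself.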

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-shumaylov24a,
  title     = {Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation},
  author    = {Shumaylov, Zakhar and Budd, Jeremy and Mukherjee, Subhadip and Sch\"{o}nlieb, Carola-Bibiane},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {45286--45314},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/shumaylov24a/shumaylov24a.pdf},
  url       = {https://proceedings.mlr.press/v235/shumaylov24a.html},
  abstract  = {Variational regularisation is the primary method for solving inverse problems, and recently there has been considerable work leveraging deeply learned regularisation for enhanced performance. However, few results exist addressing the convergence of such regularisation, particularly within the context of critical points as opposed to global minimisers. In this paper, we present a generalised formulation of convergent regularisation in terms of critical points, and show that this is achieved by a class of weakly convex regularisers. We prove convergence of the primal-dual hybrid gradient method for the associated variational problem, and, given a Kurdyka-Łojasiewicz condition, an $\mathcal{O}(\log{k}/k)$ ergodic convergence rate. Finally, applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks (IWCNN), and show empirically that IWCNNs can lead to improved performance of learned adversarial regularisers for computed tomography (CT) reconstruction.}
}
Endnote
%0 Conference Paper
%T Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
%A Zakhar Shumaylov
%A Jeremy Budd
%A Subhadip Mukherjee
%A Carola-Bibiane Schönlieb
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-shumaylov24a
%I PMLR
%P 45286--45314
%U https://proceedings.mlr.press/v235/shumaylov24a.html
%V 235
%X Variational regularisation is the primary method for solving inverse problems, and recently there has been considerable work leveraging deeply learned regularisation for enhanced performance. However, few results exist addressing the convergence of such regularisation, particularly within the context of critical points as opposed to global minimisers. In this paper, we present a generalised formulation of convergent regularisation in terms of critical points, and show that this is achieved by a class of weakly convex regularisers. We prove convergence of the primal-dual hybrid gradient method for the associated variational problem, and, given a Kurdyka-Łojasiewicz condition, an $\mathcal{O}(\log{k}/k)$ ergodic convergence rate. Finally, applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks (IWCNN), and show empirically that IWCNNs can lead to improved performance of learned adversarial regularisers for computed tomography (CT) reconstruction.
APA
Shumaylov, Z., Budd, J., Mukherjee, S. & Schönlieb, C.B. (2024). Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:45286-45314. Available from https://proceedings.mlr.press/v235/shumaylov24a.html.