From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent

Ali Joundi, Yann Traonmilin, Jean-François Aujol
Conference on Parsimony and Learning, PMLR 328:83-103, 2026.

Abstract

We consider the problem of recovering an unknown low-dimensional vector from noisy, underdetermined observations. We focus on the Generalized Projected Gradient Descent (GPGD) framework, which unifies traditional sparse recovery methods and modern approaches using learned deep projective priors. We extend previous convergence results to establish robustness to model and projection errors. We use these theoretical results to explore ways to better control stability and robustness constants. To reduce recovery errors due to measurement noise, we consider generalized back-projection strategies to adapt GPGD to structured noise, such as sparse outliers. To improve the stability of GPGD, we propose a normalized idempotent regularization for the learning of deep projective priors. We provide numerical experiments in the context of sparse recovery and image inverse problems, highlighting the trade-offs between identifiability and stability that can be achieved with such methods.
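For readers who want the basic recursion behind the framework: a generic projected gradient iteration for observations y = Ax + e alternates a gradient step on the data-fidelity term with a projection P onto the low-dimensional model set, x_{k+1} = P(x_k - tau * A^T (A x_k - y)). The sketch below is a minimal illustrative instance using hard thresholding as the projection (the classical sparse-recovery case; in the plug-and-play setting P would be a learned deep projector). The function names, step-size choice, and toy problem are ours for illustration, not the paper's exact algorithm.

    import numpy as np

    def hard_threshold(x, k):
        # Projection onto k-sparse vectors: keep the k largest-magnitude entries.
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        z[idx] = x[idx]
        return z

    def gpgd(y, A, project, tau, n_iter=300):
        # Generic projected gradient descent for y = A x + e.
        # `project` plays the role of the projective prior: hard thresholding
        # here, a learned deep projector in the plug-and-play setting.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)     # gradient of 0.5 * ||A x - y||^2
            x = project(x - tau * grad)  # gradient step, then projection
        return x

    # Toy sparse recovery: m < n noisy Gaussian measurements of a k-sparse vector.
    rng = np.random.default_rng(0)
    m, n, k = 50, 200, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    y = A @ x_true + 0.01 * rng.normal(size=m)

    tau = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x_hat = gpgd(y, A, lambda v: hard_threshold(v, k), tau)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))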

Cite this Paper


BibTeX
@InProceedings{pmlr-v328-joundi26a,
  title     = {From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent},
  author    = {Joundi, Ali and Traonmilin, Yann and Aujol, Jean-Fran\c{c}ois},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {83--103},
  year      = {2026},
  editor    = {Burkholz, Rebekka and Liu, Shiwei and Ravishankar, Saiprasad and Redman, William and Huang, Wei and Su, Weijie and Zhu, Zhihui},
  volume    = {328},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v328/main/assets/joundi26a/joundi26a.pdf},
  url       = {https://proceedings.mlr.press/v328/joundi26a.html}
}
Endnote
%0 Conference Paper
%T From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent
%A Ali Joundi
%A Yann Traonmilin
%A Jean-François Aujol
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Rebekka Burkholz
%E Shiwei Liu
%E Saiprasad Ravishankar
%E William Redman
%E Wei Huang
%E Weijie Su
%E Zhihui Zhu
%F pmlr-v328-joundi26a
%I PMLR
%P 83--103
%U https://proceedings.mlr.press/v328/joundi26a.html
%V 328
APA
Joundi, A., Traonmilin, Y. & Aujol, J.-F. (2026). From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 328:83-103. Available from https://proceedings.mlr.press/v328/joundi26a.html.