Fundamental Limits of Non-Linear Low-Rank Matrix Estimation

Pierre Mergny, Justin Ko, Florent Krzakala, Lenka Zdeborová
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3873-3873, 2024.

Abstract

We consider the task of estimating a low-rank matrix from non-linear and noisy observations. We prove a strong universality result showing that Bayes-optimal performance is characterized by an equivalent Gaussian model with an effective prior, whose parameters are entirely determined by an expansion of the non-linear function. In particular, we show that to reconstruct the signal accurately, one requires a signal-to-noise ratio growing as \(N^{\frac 12 (1-1/k_F)}\), where \(k_F\) is the first non-zero Fisher information coefficient of the function. We provide an asymptotic characterization of the minimal achievable mean squared error (MMSE) and an approximate message-passing algorithm that reaches the MMSE under conditions analogous to the linear version of the problem. We also provide asymptotic errors achieved by methods such as principal component analysis combined with Bayesian denoising, and compare them with the Bayes-optimal MMSE.

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-mergny24a,
  title     = {Fundamental Limits of Non-Linear Low-Rank Matrix Estimation},
  author    = {Mergny, Pierre and Ko, Justin and Krzakala, Florent and Zdeborov{\'a}, Lenka},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3873--3873},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/mergny24a/mergny24a.pdf},
  url       = {https://proceedings.mlr.press/v247/mergny24a.html},
  abstract  = {We consider the task of estimating a low-rank matrix from non-linear and noisy observations. We prove a strong universality result showing that Bayes-optimal performance is characterized by an equivalent Gaussian model with an effective prior, whose parameters are entirely determined by an expansion of the non-linear function. In particular, we show that to reconstruct the signal accurately, one requires a signal-to-noise ratio growing as \(N^{\frac 12 (1-1/k_F)}\), where \(k_F\) is the first non-zero Fisher information coefficient of the function. We provide an asymptotic characterization of the minimal achievable mean squared error (MMSE) and an approximate message-passing algorithm that reaches the MMSE under conditions analogous to the linear version of the problem. We also provide asymptotic errors achieved by methods such as principal component analysis combined with Bayesian denoising, and compare them with the Bayes-optimal MMSE.}
}
Endnote
%0 Conference Paper
%T Fundamental Limits of Non-Linear Low-Rank Matrix Estimation
%A Pierre Mergny
%A Justin Ko
%A Florent Krzakala
%A Lenka Zdeborová
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-mergny24a
%I PMLR
%P 3873--3873
%U https://proceedings.mlr.press/v247/mergny24a.html
%V 247
%X We consider the task of estimating a low-rank matrix from non-linear and noisy observations. We prove a strong universality result showing that Bayes-optimal performance is characterized by an equivalent Gaussian model with an effective prior, whose parameters are entirely determined by an expansion of the non-linear function. In particular, we show that to reconstruct the signal accurately, one requires a signal-to-noise ratio growing as \(N^{\frac 12 (1-1/k_F)}\), where \(k_F\) is the first non-zero Fisher information coefficient of the function. We provide an asymptotic characterization of the minimal achievable mean squared error (MMSE) and an approximate message-passing algorithm that reaches the MMSE under conditions analogous to the linear version of the problem. We also provide asymptotic errors achieved by methods such as principal component analysis combined with Bayesian denoising, and compare them with the Bayes-optimal MMSE.
APA
Mergny, P., Ko, J., Krzakala, F. & Zdeborová, L. (2024). Fundamental Limits of Non-Linear Low-Rank Matrix Estimation. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3873-3873. Available from https://proceedings.mlr.press/v247/mergny24a.html.