# Fundamental Limits of Non-Linear Low-Rank Matrix Estimation

*Proceedings of Thirty Seventh Conference on Learning Theory*, PMLR 247:3873-3873, 2024.

#### Abstract

We consider the task of estimating a low-rank matrix from non-linear and noisy observations. We prove a strong universality result showing that Bayes-optimal performance is characterized by an equivalent Gaussian model with an effective prior, whose parameters are entirely determined by an expansion of the non-linear function. In particular, we show that to reconstruct the signal accurately, one requires a signal-to-noise ratio growing as \(N^{\frac 12 (1-1/k_F)}\), where \(k_F\) is the first non-zero Fisher information coefficient of the function. We provide an asymptotic characterization of the minimum achievable mean squared error (MMSE) and an approximate message-passing algorithm that reaches the MMSE under conditions analogous to those of the linear version of the problem. We also provide the asymptotic errors achieved by methods such as principal component analysis combined with Bayesian denoising, and compare them with the Bayes-optimal MMSE.
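As a worked instance of the stated scaling (evaluated here for illustration, not given in the abstract), the required signal-to-noise ratio for the first few values of \(k_F\) is:

```latex
\[
\mathrm{SNR} \sim N^{\frac{1}{2}\left(1 - \frac{1}{k_F}\right)}
\quad\Longrightarrow\quad
k_F = 1:\ N^{0}, \qquad
k_F = 2:\ N^{1/4}, \qquad
k_F = 3:\ N^{1/3}.
\]
```

Thus when the first Fisher information coefficient is non-zero at first order, the required signal-to-noise ratio stays of order one, while functions whose first non-zero coefficient appears at higher order demand a polynomially growing signal-to-noise ratio.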