Non-Convex Matrix Completion Against a Semi-Random Adversary
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1362-1394, 2018.
Abstract
Matrix completion is a well-studied problem with many machine learning applications. In practice, the problem is often solved by non-convex optimization algorithms. However, the current theoretical analysis for non-convex algorithms relies crucially on the assumption that each entry of the matrix is observed with exactly the same probability $p$, which is not realistic in practice. In this paper, we investigate a more realistic semi-random model, where the probability of observing each entry is {\em at least} $p$. Even with this mild semi-random perturbation, we can construct counterexamples where existing non-convex algorithms get stuck in bad local optima. In light of the negative results, we propose a pre-processing step that tries to re-weight the semi-random input, so that it becomes “similar” to a random input. We give a nearly-linear time algorithm for this problem, and show that after our pre-processing, all the local minima of the non-convex objective can be used to approximately recover the underlying ground-truth matrix.
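To make the two observation models concrete, the following is a minimal toy sketch (not the paper's algorithm). It contrasts the uniform model, where each entry is revealed with probability exactly $p$, against a semi-random adversary that may reveal *extra* entries (here, arbitrarily chosen dense columns), and runs plain non-convex (factorized) gradient descent on the observed entries. All dimensions, the spectral initialization, the step size, and the adversary's choice of extra entries are illustrative assumptions; the paper's re-weighting pre-processing is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 50, 2, 0.3  # toy problem size (assumptions, not from the paper)

# Ground-truth rank-r matrix M = U V^T.
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
M = U @ V.T

# Uniform model: each entry observed independently with probability exactly p.
base_mask = rng.random((n, n)) < p

# Semi-random model: the adversary may ADD observations, so every entry is
# observed with probability at least p. One arbitrary choice: densely reveal
# the first five columns on top of the uniform sample.
semi_mask = base_mask.copy()
semi_mask[:, :5] = True

def gd_completion(mask, steps=300, lr=0.005):
    """Vanilla non-convex matrix completion: factorize as X Y^T and run
    gradient descent on the squared error over observed entries only."""
    # Spectral initialization from the (rescaled) observed matrix,
    # a standard heuristic for this kind of objective.
    U0, s, V0t = np.linalg.svd((M * mask) / p, full_matrices=False)
    X = U0[:, :r] * np.sqrt(s[:r])
    Y = V0t[:r].T * np.sqrt(s[:r])
    for _ in range(steps):
        R = (X @ Y.T - M) * mask          # residual on observed entries
        X, Y = X - lr * (R @ Y), Y - lr * (R.T @ X)
    return np.linalg.norm(X @ Y.T - M) / np.linalg.norm(M)

err_uniform = gd_completion(base_mask)
err_semi = gd_completion(semi_mask)
print(f"relative error  uniform: {err_uniform:.3f}  semi-random: {err_semi:.3f}")
```

On this small random instance both runs may happen to succeed; the paper's counterexamples are carefully constructed so that the extra (adversarial) observations bias the objective and trap such descent methods in bad local optima, which is what motivates the re-weighting pre-processing step.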