Non-Convex Matrix Completion Against a Semi-Random Adversary

Yu Cheng, Rong Ge
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1362-1394, 2018.

Abstract

Matrix completion is a well-studied problem with many machine learning applications. In practice, the problem is often solved by non-convex optimization algorithms. However, the current theoretical analysis for non-convex algorithms relies crucially on the assumption that each entry of the matrix is observed with exactly the same probability p, which is not realistic in practice. In this paper, we investigate a more realistic semi-random model, where the probability of observing each entry is at least p. Even with this mild semi-random perturbation, we can construct counter-examples where existing non-convex algorithms get stuck in bad local optima. In light of the negative results, we propose a pre-processing step that tries to re-weight the semi-random input, so that it becomes “similar” to a random input. We give a nearly-linear time algorithm for this problem, and show that after our pre-processing, all the local minima of the non-convex objective can be used to approximately recover the underlying ground-truth matrix.
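To make the setting concrete, below is a minimal, illustrative Python sketch of the semi-random observation model and a generic weighted, factorized non-convex objective. It is not the paper's algorithm: the dimensions, the dense extra block revealed by the "adversary", the uniform placeholder weights (standing in for the paper's re-weighting pre-processing), and the plain gradient descent are all assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a random rank-r matrix (sizes are arbitrary illustration choices).
n, r, p = 200, 3, 0.1
U_star = rng.normal(size=(n, r)) / np.sqrt(r)
V_star = rng.normal(size=(n, r)) / np.sqrt(r)
M = U_star @ V_star.T

# Semi-random model: every entry is observed with probability AT LEAST p.
# Base mask: each entry independently with probability p.
mask = rng.random((n, n)) < p
# The "adversary" reveals extra entries (here a dense block). This only adds
# information, yet it breaks analyses that assume uniform sampling.
mask[: n // 4, : n // 4] = True

# Generic weighted factorized objective on the observed entries. The weights w
# would come from the paper's re-weighting pre-processing; uniform weights are
# used here purely as a placeholder.
w = mask.astype(float)

def loss_and_grads(U, V):
    R = w * (U @ V.T - M)          # weighted residual on observed entries
    loss = 0.5 * np.sum(R ** 2)
    return loss, R @ V, R.T @ U    # gradients w.r.t. U and V

# Plain gradient descent from a small random start (not the paper's method).
U = 0.01 * rng.normal(size=(n, r))
V = 0.01 * rng.normal(size=(n, r))
eta = 0.2 / (p * n)
for t in range(500):
    loss, gU, gV = loss_and_grads(U, V)
    U -= eta * gU
    V -= eta * gV

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"final loss {loss:.3e}, relative recovery error {rel_err:.3e}")

With the placeholder uniform weights this sketch only illustrates the model; the paper's contribution is the nearly-linear time re-weighting step that makes such objectives free of bad local minima despite the adversarial extra observations.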

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-cheng18b,
  title     = {Non-Convex Matrix Completion Against a Semi-Random Adversary},
  author    = {Cheng, Yu and Ge, Rong},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {1362--1394},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/cheng18b/cheng18b.pdf},
  url       = {https://proceedings.mlr.press/v75/cheng18b.html}
}
Endnote
%0 Conference Paper
%T Non-Convex Matrix Completion Against a Semi-Random Adversary
%A Yu Cheng
%A Rong Ge
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-cheng18b
%I PMLR
%P 1362--1394
%U https://proceedings.mlr.press/v75/cheng18b.html
%V 75
APA
Cheng, Y. & Ge, R. (2018). Non-Convex Matrix Completion Against a Semi-Random Adversary. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:1362-1394. Available from https://proceedings.mlr.press/v75/cheng18b.html.

Related Material

Download PDF: http://proceedings.mlr.press/v75/cheng18b/cheng18b.pdf