Additive Error Guarantees for Weighted Low Rank Approximation

Aditya Bhaskara, Aravinda Kanchana Ruwanpathirana, Maheshakya Wijewardena
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:874-883, 2021.

Abstract

Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\|A - L\|_F^2$. In many applications, however, approximating some entries is more important than others, which leads to the weighted low rank approximation problem. The addition of weights makes the problem intractable, so many works have obtained efficient algorithms under additional structural assumptions on the weight matrix (such as low rank or appropriate block structure). We study a natural greedy algorithm for weighted low rank approximation and develop a simple condition under which it yields a bi-criteria approximation up to a small additive factor in the error. The algorithm involves iteratively computing the top singular vector of an appropriately varying matrix, and is thus easy to implement at scale. Our methods also allow us to study the problem of low rank approximation under $\ell_p$ norm error.
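
To make the greedy procedure concrete, below is a minimal sketch in Python/NumPy. It is an illustration under assumptions, not the paper's exact algorithm: the rank-1 update direction is taken from the top singular pair of the entrywise-weighted residual $W \circ (A - L)$, one plausible reading of the abstract's "appropriately varying matrix", and the step size is chosen to minimize the assumed weighted objective $\sum_{ij} W_{ij}(A_{ij} - L_{ij})^2$ along that direction. The function name greedy_weighted_lra and all parameters are hypothetical.

import numpy as np

def greedy_weighted_lra(A, W, k, eps=1e-12):
    """Greedy bi-criteria sketch for weighted low rank approximation.

    Assumed objective: minimize sum_ij W_ij * (A_ij - L_ij)**2 over
    rank-k matrices L. Each iteration adds one rank-1 term whose
    direction comes from the top singular pair of the weighted residual.
    """
    L = np.zeros_like(A, dtype=float)
    for _ in range(k):
        R = W * (A - L)                       # entrywise-weighted residual
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        u, v = U[:, 0], Vt[0]                 # top singular vectors of R
        uv = np.outer(u, v)
        # Optimal coefficient for the weighted objective along (u, v).
        c = np.sum(W * (A - L) * uv) / max(np.sum(W * uv ** 2), eps)
        L = L + c * uv
    return L

# Tiny usage example with random data and positive weights.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
W = rng.uniform(0.5, 2.0, size=A.shape)
L = greedy_weighted_lra(A, W, k=5)
weighted_err = np.sum(W * (A - L) ** 2)

Each iteration costs one (truncated) SVD of an $m \times n$ matrix, which is why such a scheme is easy to run at scale; the condition under which the greedy updates come with an additive error guarantee is developed in the paper itself.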

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-bhaskara21a,
  title     = {Additive Error Guarantees for Weighted Low Rank Approximation},
  author    = {Bhaskara, Aditya and Ruwanpathirana, Aravinda Kanchana and Wijewardena, Maheshakya},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {874--883},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/bhaskara21a/bhaskara21a.pdf},
  url       = {https://proceedings.mlr.press/v139/bhaskara21a.html},
  abstract  = {Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\|A - L\|_F^2$. In many applications, however, approximating some entries is more important than others, which leads to the weighted low rank approximation problem. The addition of weights makes the problem intractable, so many works have obtained efficient algorithms under additional structural assumptions on the weight matrix (such as low rank or appropriate block structure). We study a natural greedy algorithm for weighted low rank approximation and develop a simple condition under which it yields a bi-criteria approximation up to a small additive factor in the error. The algorithm involves iteratively computing the top singular vector of an appropriately varying matrix, and is thus easy to implement at scale. Our methods also allow us to study the problem of low rank approximation under $\ell_p$ norm error.}
}
Endnote
%0 Conference Paper
%T Additive Error Guarantees for Weighted Low Rank Approximation
%A Aditya Bhaskara
%A Aravinda Kanchana Ruwanpathirana
%A Maheshakya Wijewardena
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-bhaskara21a
%I PMLR
%P 874--883
%U https://proceedings.mlr.press/v139/bhaskara21a.html
%V 139
%X Low-rank approximation is a classic tool in data analysis, where the goal is to approximate a matrix $A$ with a low-rank matrix $L$ so as to minimize the error $\|A - L\|_F^2$. In many applications, however, approximating some entries is more important than others, which leads to the weighted low rank approximation problem. The addition of weights makes the problem intractable, so many works have obtained efficient algorithms under additional structural assumptions on the weight matrix (such as low rank or appropriate block structure). We study a natural greedy algorithm for weighted low rank approximation and develop a simple condition under which it yields a bi-criteria approximation up to a small additive factor in the error. The algorithm involves iteratively computing the top singular vector of an appropriately varying matrix, and is thus easy to implement at scale. Our methods also allow us to study the problem of low rank approximation under $\ell_p$ norm error.
APA
Bhaskara, A., Ruwanpathirana, A.K. & Wijewardena, M. (2021). Additive Error Guarantees for Weighted Low Rank Approximation. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:874-883. Available from https://proceedings.mlr.press/v139/bhaskara21a.html.