Low-Rank Matrix Approximation with Stability

Dongsheng Li, Chao Chen, Qin Lv, Junchi Yan, Li Shang, Stephen Chu
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:295-303, 2016.

Abstract

Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, which is incomplete and noisy, poses challenges to algorithm stability: small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work achieves better prediction accuracy than both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation tasks.
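As background, the setting the abstract describes can be illustrated with the standard regularized matrix-factorization baseline for sparse rating data: fit a rank-k approximation R ≈ UVᵀ over the observed entries only, with an L2 penalty on the factors. This is a minimal generic sketch, not the paper's stability-oriented objective (which is its contribution); the function name and hyperparameters are illustrative.

```python
import numpy as np

def factorize(ratings, rank=2, lr=0.05, reg=0.05, epochs=500, seed=0):
    """Fit a rank-`rank` approximation R ~ U @ V.T by SGD on observed entries.

    `ratings` is a list of (user, item, value) triples. Only observed
    entries contribute to the squared loss; the L2 penalty `reg` shrinks
    the factors, damping the sensitivity to small training-set changes
    that the abstract calls algorithm (in)stability.
    """
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            # Compute both gradients from the current factors before updating.
            grad_u = err * V[i] - reg * U[u]
            grad_v = err * U[u] - reg * V[i]
            U[u] += lr * grad_u
            V[i] += lr * grad_v
    return U, V

# Toy 3-user / 3-item example; unobserved cells are simply absent.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 2.0), (2, 2, 1.0)]
U, V = factorize(ratings)
train_rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]))
```

Minimizing this training loss alone is exactly the regime the paper critiques: a low training RMSE here does not by itself bound the error on held-out entries.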

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-lib16,
  title     = {Low-Rank Matrix Approximation with Stability},
  author    = {Li, Dongsheng and Chen, Chao and Lv, Qin and Yan, Junchi and Shang, Li and Chu, Stephen},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {295--303},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/lib16.pdf},
  url       = {https://proceedings.mlr.press/v48/lib16.html},
  abstract  = {Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, incomplete and noisy, introduces challenges to the algorithm stability – small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work can achieve better prediction accuracy compared with both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation task.}
}
Endnote
%0 Conference Paper
%T Low-Rank Matrix Approximation with Stability
%A Dongsheng Li
%A Chao Chen
%A Qin Lv
%A Junchi Yan
%A Li Shang
%A Stephen Chu
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-lib16
%I PMLR
%P 295--303
%U https://proceedings.mlr.press/v48/lib16.html
%V 48
%X Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, incomplete and noisy, introduces challenges to the algorithm stability – small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work can achieve better prediction accuracy compared with both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation task.
RIS
TY - CPAPER
TI - Low-Rank Matrix Approximation with Stability
AU - Dongsheng Li
AU - Chao Chen
AU - Qin Lv
AU - Junchi Yan
AU - Li Shang
AU - Stephen Chu
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-lib16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 295
EP - 303
L1 - http://proceedings.mlr.press/v48/lib16.pdf
UR - https://proceedings.mlr.press/v48/lib16.html
AB - Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, incomplete and noisy, introduces challenges to the algorithm stability – small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work can achieve better prediction accuracy compared with both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation task.
ER -
APA
Li, D., Chen, C., Lv, Q., Yan, J., Shang, L. & Chu, S. (2016). Low-Rank Matrix Approximation with Stability. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:295-303. Available from https://proceedings.mlr.press/v48/lib16.html.
