Gradient descent in matrix factorization: Understanding large initialization

Hengchao Chen, Xin Chen, Mohamad Elmasri, Qiang Sun
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:619-647, 2024.

Abstract

Gradient Descent (GD) has been proven effective in solving various matrix factorization problems. However, its optimization behavior with large initial values remains less understood. To address this gap, this paper presents a novel theoretical framework for examining the convergence trajectory of GD with a large initialization. The framework is grounded in signal-to-noise ratio concepts and inductive arguments. The results uncover an implicit incremental learning phenomenon in GD and offer a deeper understanding of its performance in large initialization scenarios.
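For a concrete picture of the setting the abstract refers to, the following is a minimal, self-contained sketch of gradient descent on a symmetric matrix factorization objective started from a large random initialization. The objective, target matrix, initialization scale, and step size below are illustrative assumptions, not the paper's construction; see the paper for the precise setting and analysis.

```python
import numpy as np

# Illustrative sketch only: gradient descent on the symmetric factorization
# objective f(X) = ||X X^T - M||_F^2 / 4 with a *large* random initialization.
# The target M, initialization scale, and step size are demonstration choices.
rng = np.random.default_rng(0)
n, r = 20, 3

# Rank-3 symmetric target with well-separated eigenvalues.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
M = U @ np.diag([10.0, 5.0, 1.0]) @ U.T

# Large initialization: entries of X are O(1) rather than near zero.
X = rng.standard_normal((n, r))

eta = 0.01  # step size
for t in range(2001):
    grad = (X @ X.T - M) @ X  # gradient of f at X (M symmetric)
    X -= eta * grad
    if t % 400 == 0:
        # Track the overall fit along the trajectory.
        err = np.linalg.norm(X @ X.T - M, "fro")
        print(f"iter {t:4d}  ||XX^T - M||_F = {err:.4f}")
```

Running this prints the Frobenius error along the trajectory; the paper's contribution is a theoretical characterization of such trajectories under large initialization, including the implicit incremental learning behavior mentioned above.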

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-chen24a,
  title     = {Gradient descent in matrix factorization: Understanding large initialization},
  author    = {Chen, Hengchao and Chen, Xin and Elmasri, Mohamad and Sun, Qiang},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {619--647},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/chen24a/chen24a.pdf},
  url       = {https://proceedings.mlr.press/v244/chen24a.html},
  abstract  = {Gradient Descent (GD) has been proven effective in solving various matrix factorization problems. However, its optimization behavior with large initial values remains less understood. To address this gap, this paper presents a novel theoretical framework for examining the convergence trajectory of GD with a large initialization. The framework is grounded in signal-to-noise ratio concepts and inductive arguments. The results uncover an implicit incremental learning phenomenon in GD and offer a deeper understanding of its performance in large initialization scenarios.}
}
Endnote
%0 Conference Paper
%T Gradient descent in matrix factorization: Understanding large initialization
%A Hengchao Chen
%A Xin Chen
%A Mohamad Elmasri
%A Qiang Sun
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-chen24a
%I PMLR
%P 619--647
%U https://proceedings.mlr.press/v244/chen24a.html
%V 244
%X Gradient Descent (GD) has been proven effective in solving various matrix factorization problems. However, its optimization behavior with large initial values remains less understood. To address this gap, this paper presents a novel theoretical framework for examining the convergence trajectory of GD with a large initialization. The framework is grounded in signal-to-noise ratio concepts and inductive arguments. The results uncover an implicit incremental learning phenomenon in GD and offer a deeper understanding of its performance in large initialization scenarios.
APA
Chen, H., Chen, X., Elmasri, M., & Sun, Q. (2024). Gradient descent in matrix factorization: Understanding large initialization. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:619-647. Available from https://proceedings.mlr.press/v244/chen24a.html.