Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation

Yuanhao Pu, Xiaolong Chen, Xu Huang, Jin Chen, Defu Lian, Enhong Chen
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41183-41203, 2024.

Abstract

The weighted squared loss is a common component in several Collaborative Filtering (CF) algorithms for item recommendation, including the representative implicit Alternating Least Squares (iALS). Despite its widespread use, this loss function lacks a clear connection to ranking objectives such as Discounted Cumulative Gain (DCG), making it hard to explain the strong ranking performance observed in these algorithms. In this work, we establish such a connection between the squared loss and ranking metrics through a Taylor expansion of the softmax loss, a DCG-consistent surrogate. We further derive a new surrogate squared loss, the Ranking-Generalizable Squared (RG$^2$) loss, and conduct a thorough theoretical analysis of its DCG-consistency. We then present an example of applying the RG$^2$ loss to Matrix Factorization (MF), together with a generalization upper bound and an ALS optimization algorithm that leverages closed-form solutions over all items. Experimental results on three public datasets demonstrate the effectiveness of the RG$^2$ loss: it matches or surpasses the ranking performance of the softmax loss while converging faster.
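
For intuition on how a squared loss can arise from the softmax loss, consider a second-order Taylor expansion of the log-partition term around zero scores. The notation below (per-user score vector $s_u$ over $n$ items, positive set $\mathcal{P}_u$) is ours, and this is a generic expansion illustrating the idea from the abstract, not the paper's exact RG$^2$ derivation:

\ell_u = -\sum_{i \in \mathcal{P}_u} \Big( s_{ui} - \log \sum_{j=1}^{n} e^{s_{uj}} \Big),
\qquad
\log \sum_{j=1}^{n} e^{s_{uj}} \approx \log n + \bar{s}_u + \frac{1}{2}\Big( \frac{1}{n}\lVert s_u \rVert_2^2 - \bar{s}_u^2 \Big),
\quad \bar{s}_u = \frac{1}{n}\sum_{j=1}^{n} s_{uj}.

Substituting the expansion back makes each per-user term a quadratic function of the score vector, which is the sense in which a squared-form loss can approximate a DCG-consistent surrogate such as the softmax loss.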
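The "ALS optimization algorithm that leverages closed-form solutions" also admits a compact illustration. The sketch below is a generic alternating least squares routine for MF under a plain (unweighted) squared loss with L2 regularization; the function and variable names are ours, and it omits the RG$^2$-specific weighting, so it should be read as a sketch of the closed-form update structure, not the paper's algorithm:

import numpy as np

def als_mf(R, d=32, reg=0.1, iters=10, seed=0):
    # Generic ALS for min ||R - U V^T||_F^2 + reg * (||U||_F^2 + ||V||_F^2).
    # Illustrative only: NOT the paper's RG^2-specific algorithm, which uses
    # a different, ranking-derived quadratic objective.
    m, n = R.shape
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(m, d))
    V = rng.normal(scale=0.1, size=(n, d))
    I = np.eye(d)
    for _ in range(iters):
        # With V fixed, the user factors have the closed-form ridge solution
        # U <- R V (V^T V + reg I)^{-1}; solve() avoids an explicit inverse.
        U = np.linalg.solve(V.T @ V + reg * I, V.T @ R.T).T
        # Symmetric closed-form update for the item factors.
        V = np.linalg.solve(U.T @ U + reg * I, U.T @ R).T
    return U, V

# Example usage on a toy binary interaction matrix:
# R = np.random.default_rng(1).integers(0, 2, size=(100, 50)).astype(float)
# U, V = als_mf(R)
# scores = U @ V.T  # higher score = item ranked higher for that user

Because each subproblem is a ridge regression, every update is exact and non-iterative, which is what makes ALS-style training fast relative to gradient-based optimization of the softmax loss.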

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-pu24a,
  title     = {Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation},
  author    = {Pu, Yuanhao and Chen, Xiaolong and Huang, Xu and Chen, Jin and Lian, Defu and Chen, Enhong},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {41183--41203},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/pu24a/pu24a.pdf},
  url       = {https://proceedings.mlr.press/v235/pu24a.html}
}
Endnote
%0 Conference Paper
%T Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation
%A Yuanhao Pu
%A Xiaolong Chen
%A Xu Huang
%A Jin Chen
%A Defu Lian
%A Enhong Chen
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-pu24a
%I PMLR
%P 41183--41203
%U https://proceedings.mlr.press/v235/pu24a.html
%V 235
APA
Pu, Y., Chen, X., Huang, X., Chen, J., Lian, D. & Chen, E. (2024). Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:41183-41203. Available from https://proceedings.mlr.press/v235/pu24a.html.