Enhancing Cross-Category Learning in Recommendation Systems with Multi-Layer Embedding Training

Zihao Deng, Benjamin Ghaemmaghami, Ashish Kumar Singh, Benjamin Cho, Leo Orshansky, Mattan Erez, Michael Orshansky
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:263-278, 2024.

Abstract

Modern DNN-based recommendation systems rely on training-derived embeddings of sparse features. Input sparsity makes obtaining high-quality embeddings for rarely-occurring categories harder, as their representations are updated infrequently. We demonstrate a training-time technique to produce superior embeddings via effective cross-category learning and theoretically explain its surprising effectiveness. The scheme, termed multi-layer embedding training (MLET), trains embeddings using a factorization of the embedding layer with an inner dimension higher than the target embedding dimension. For inference efficiency, MLET converts the trained two-layer embedding into a single-layer one, thus keeping the inference-time model size unchanged. The empirical superiority of MLET is puzzling, as its search space is no larger than that of the single-layer embedding. The strong dependence of MLET on the inner dimension is even more surprising. We develop a theory that explains both of these behaviors by showing that MLET creates an adaptive update mechanism modulated by the singular vectors of the embeddings. When tested on multiple state-of-the-art recommendation models for click-through rate (CTR) prediction tasks, MLET consistently produces better models, especially for rare items. At constant model quality, MLET allows embedding dimension, and hence model size, reduction by up to 16x (5.8x on average) across the models.
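The factorize-then-collapse scheme described in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the vocabulary size, dimensions, and variable names (`W1`, `W2`, `lookup`) are hypothetical, and the gradient updates that MLET applies during training are elided.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: the inner dimension exceeds the target embedding dimension.
vocab, d_target, d_inner = 1000, 8, 32

# Two-layer factorized embedding: the effective table is W1 @ W2,
# with both factors trained jointly during MLET.
W1 = rng.normal(scale=0.1, size=(vocab, d_inner))
W2 = rng.normal(scale=0.1, size=(d_inner, d_target))

def lookup(ids):
    # Training-time lookup passes through both layers.
    return W1[ids] @ W2

# ... gradient updates to W1 and W2 would happen here during training ...

# After training, collapse the factors into a single-layer table for inference,
# so the deployed model is the same size as an ordinary embedding layer.
E = W1 @ W2  # shape (vocab, d_target)

ids = np.array([3, 42])
assert E.shape == (vocab, d_target)
assert np.allclose(E[ids], lookup(ids))  # collapsed table reproduces the lookup
```

The collapsed table `E` is what ships at inference time; the extra inner dimension exists only during training, which is why model quality can improve without any inference-time cost.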

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-deng24a,
  title     = {Enhancing Cross-Category Learning in Recommendation Systems with Multi-Layer Embedding Training},
  author    = {Deng, Zihao and Ghaemmaghami, Benjamin and Singh, Ashish Kumar and Cho, Benjamin and Orshansky, Leo and Erez, Mattan and Orshansky, Michael},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {263--278},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/deng24a/deng24a.pdf},
  url       = {https://proceedings.mlr.press/v222/deng24a.html},
  abstract  = {Modern DNN-based recommendation systems rely on training-derived embeddings of sparse features. Input sparsity makes obtaining high-quality embeddings for rarely-occurring categories harder as their representations are updated infrequently. We demonstrate a training-time technique to produce superior embeddings via effective cross-category learning and theoretically explain its surprising effectiveness. The scheme, termed the multi-layer embeddings training (MLET), trains embeddings using factorization of the embedding layer, with an inner dimension higher than the target embedding dimension. For inference efficiency, MLET converts the trained two-layer embedding into a single-layer one thus keeping inference-time model size unchanged. Empirical superiority of MLET is puzzling as its search space is not larger than that of the single-layer embedding. The strong dependence of MLET on the inner dimension is even more surprising. We develop a theory that explains both of these behaviors by showing that MLET creates an adaptive update mechanism modulated by the singular vectors of embeddings. When tested on multiple state-of-the-art recommendation models for click-through rate (CTR) prediction tasks, MLET consistently produces better models, especially for rare items. At constant model quality, MLET allows embedding dimension, and model size, reduction by up to 16x, and 5.8x on average, across the models.}
}
Endnote
%0 Conference Paper
%T Enhancing Cross-Category Learning in Recommendation Systems with Multi-Layer Embedding Training
%A Zihao Deng
%A Benjamin Ghaemmaghami
%A Ashish Kumar Singh
%A Benjamin Cho
%A Leo Orshansky
%A Mattan Erez
%A Michael Orshansky
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-deng24a
%I PMLR
%P 263--278
%U https://proceedings.mlr.press/v222/deng24a.html
%V 222
%X Modern DNN-based recommendation systems rely on training-derived embeddings of sparse features. Input sparsity makes obtaining high-quality embeddings for rarely-occurring categories harder as their representations are updated infrequently. We demonstrate a training-time technique to produce superior embeddings via effective cross-category learning and theoretically explain its surprising effectiveness. The scheme, termed the multi-layer embeddings training (MLET), trains embeddings using factorization of the embedding layer, with an inner dimension higher than the target embedding dimension. For inference efficiency, MLET converts the trained two-layer embedding into a single-layer one thus keeping inference-time model size unchanged. Empirical superiority of MLET is puzzling as its search space is not larger than that of the single-layer embedding. The strong dependence of MLET on the inner dimension is even more surprising. We develop a theory that explains both of these behaviors by showing that MLET creates an adaptive update mechanism modulated by the singular vectors of embeddings. When tested on multiple state-of-the-art recommendation models for click-through rate (CTR) prediction tasks, MLET consistently produces better models, especially for rare items. At constant model quality, MLET allows embedding dimension, and model size, reduction by up to 16x, and 5.8x on average, across the models.
APA
Deng, Z., Ghaemmaghami, B., Singh, A.K., Cho, B., Orshansky, L., Erez, M. & Orshansky, M. (2024). Enhancing Cross-Category Learning in Recommendation Systems with Multi-Layer Embedding Training. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:263-278. Available from https://proceedings.mlr.press/v222/deng24a.html.
