Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems

Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:2256-2266, 2022.

Abstract

One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes a mismatch between the training and testing data distributions and hence a domain generalization error. To overcome this, we propose to learn a meta future gradient generator that forecasts the gradient information of the future data distribution for training, so that the recommendation model can be trained as if we were able to look ahead at the future of its deployment. Compared with Batch Update, a widely used paradigm, our theory suggests that the proposed algorithm achieves a smaller temporal domain generalization error, measured by a gradient variation term in a local regret. We demonstrate the empirical advantage by comparing with various representative baselines.
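The core idea of the abstract — update the model with a forecast of the *next* period's gradient rather than the current one — can be sketched on a toy drifting problem. The following is a hypothetical illustration, not the authors' implementation: a simple linear extrapolation of gradients across consecutive data distributions stands in for the learned meta future gradient generator, and the drifting scalar target stands in for the shifting data distribution.

```python
def grad(w, target):
    """Gradient of the per-period loss 0.5 * (w - target)**2."""
    return w - target

def run(lr=0.1, steps=50, use_future=False):
    """Track a linearly drifting optimum, optionally with a gradient forecast."""
    w, prev_target = 0.0, None
    for t in range(steps):
        target = 0.1 * t          # the data distribution's optimum drifts over time
        g = grad(w, target)
        if use_future and prev_target is not None:
            # Crude stand-in for the meta future gradient generator:
            # linearly extrapolate the gradient across the last two
            # distributions, both evaluated at the current parameters.
            g = 2 * g - grad(w, prev_target)
        w -= lr * g
        prev_target = target
    return w, target

w_plain, final_target = run(use_future=False)
w_fgd, _ = run(use_future=True)
# The forecasted-gradient update tracks the drifting optimum with a
# smaller steady-state lag than plain SGD on the current distribution.
print(abs(w_plain - final_target), abs(w_fgd - final_target))
```

On this toy drift, plain SGD settles into a steady-state lag behind the moving optimum, while the forecasted-gradient update shrinks that lag — a loose analogue of the smaller gradient-variation term in the paper's local-regret bound.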

Cite this Paper


BibTeX
@InProceedings{pmlr-v180-ye22b,
  title     = {Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems},
  author    = {Ye, Mao and Jiang, Ruichen and Wang, Haoxiang and Choudhary, Dhruv and Du, Xiaocong and Bhushanam, Bhargav and Mokhtari, Aryan and Kejariwal, Arun and Liu, Qiang},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2256--2266},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/ye22b/ye22b.pdf},
  url       = {https://proceedings.mlr.press/v180/ye22b.html},
  abstract  = {One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error. To overcome, we propose to learn a meta future gradient generator that forecasts the gradient information of the future data distribution for training so that the recommendation model can be trained as if we were able to look ahead at the future of its deployment. Compared with Batch Update, a widely used paradigm, our theory suggests that the proposed algorithm achieves smaller temporal domain generalization error measured by a gradient variation term in a local regret. We demonstrate the empirical advantage by comparing with various representative baselines.}
}
Endnote
%0 Conference Paper
%T Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems
%A Mao Ye
%A Ruichen Jiang
%A Haoxiang Wang
%A Dhruv Choudhary
%A Xiaocong Du
%A Bhargav Bhushanam
%A Aryan Mokhtari
%A Arun Kejariwal
%A Qiang Liu
%B Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2022
%E James Cussens
%E Kun Zhang
%F pmlr-v180-ye22b
%I PMLR
%P 2256--2266
%U https://proceedings.mlr.press/v180/ye22b.html
%V 180
%X One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error. To overcome, we propose to learn a meta future gradient generator that forecasts the gradient information of the future data distribution for training so that the recommendation model can be trained as if we were able to look ahead at the future of its deployment. Compared with Batch Update, a widely used paradigm, our theory suggests that the proposed algorithm achieves smaller temporal domain generalization error measured by a gradient variation term in a local regret. We demonstrate the empirical advantage by comparing with various representative baselines.
APA
Ye, M., Jiang, R., Wang, H., Choudhary, D., Du, X., Bhushanam, B., Mokhtari, A., Kejariwal, A. & Liu, Q. (2022). Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:2256-2266. Available from https://proceedings.mlr.press/v180/ye22b.html.