Generative Adversarial User Model for Reinforcement Learning Based Recommendation System

Xinshi Chen, Shuang Li, Hui Li, Shaohua Jiang, Yuan Qi, Le Song
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1052-1061, 2019.

Abstract

There is great interest, as well as many challenges, in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging. In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, in which we develop a generative adversarial network to imitate user behavior dynamics and learn the user's reward function. Using this user model as the simulation environment, we develop a novel Cascading DQN algorithm to obtain a combinatorial recommendation policy that can handle a large number of candidate items efficiently. In experiments with real data, we show that this generative adversarial user model explains user behavior better than alternatives, and that the RL policy based on this model leads to a better long-term reward for the user and a higher click rate for the system.
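The combinatorial difficulty the abstract refers to is that recommending a slate of k items from N candidates naively requires scoring all C(N, k) subsets. The cascading idea instead fills the slate one position at a time, conditioning each position's Q-value on the items already chosen, which costs roughly k passes over the N candidates. The sketch below is a toy illustration of that greedy cascade only, not the paper's actual learned Q-networks: the embeddings, the `q_slot` scoring function (a base score with a diversity penalty standing in for a Q-network conditioned on the partial slate), and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k, dim = 100, 5, 8
# Stand-in item embeddings; in the paper these roles are played by
# learned Q-networks, not fixed vectors.
item_emb = rng.normal(size=(n_items, dim))

def q_slot(chosen, candidate):
    """Hypothetical per-position Q-value, conditioned on the partial slate.

    Base attractiveness of the candidate, minus a penalty for being
    too similar to an item already placed in the slate.
    """
    base = item_emb[candidate].sum()
    if chosen:
        sim = max(item_emb[candidate] @ item_emb[c] for c in chosen)
        return base - 0.5 * sim
    return base

def cascade_recommend(k):
    """Greedy cascade: fill the k slate positions one at a time,
    each argmax conditioned on the items chosen so far."""
    chosen = []
    for _ in range(k):
        remaining = [i for i in range(n_items) if i not in chosen]
        best = max(remaining, key=lambda i: q_slot(chosen, i))
        chosen.append(best)
    return chosen

slate = cascade_recommend(k)
print(slate)
```

The key point is the cost: k sweeps over at most N candidates (O(kN) Q-evaluations) instead of enumerating all C(N, k) slates.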

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-chen19f,
  title     = {Generative Adversarial User Model for Reinforcement Learning Based Recommendation System},
  author    = {Chen, Xinshi and Li, Shuang and Li, Hui and Jiang, Shaohua and Qi, Yuan and Song, Le},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1052--1061},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/chen19f/chen19f.pdf},
  url       = {https://proceedings.mlr.press/v97/chen19f.html},
  abstract  = {There are great interests as well as many challenges in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging. In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, where we develop a generative adversarial network to imitate user behavior dynamics and learn her reward function. Using this user model as the simulation environment, we develop a novel Cascading DQN algorithm to obtain a combinatorial recommendation policy which can handle a large number of candidate items efficiently. In our experiments with real data, we show this generative adversarial user model can better explain user behavior than alternatives, and the RL policy based on this model can lead to a better long-term reward for the user and higher click rate for the system.}
}
Endnote
%0 Conference Paper
%T Generative Adversarial User Model for Reinforcement Learning Based Recommendation System
%A Xinshi Chen
%A Shuang Li
%A Hui Li
%A Shaohua Jiang
%A Yuan Qi
%A Le Song
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-chen19f
%I PMLR
%P 1052--1061
%U https://proceedings.mlr.press/v97/chen19f.html
%V 97
%X There are great interests as well as many challenges in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging. In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, where we develop a generative adversarial network to imitate user behavior dynamics and learn her reward function. Using this user model as the simulation environment, we develop a novel Cascading DQN algorithm to obtain a combinatorial recommendation policy which can handle a large number of candidate items efficiently. In our experiments with real data, we show this generative adversarial user model can better explain user behavior than alternatives, and the RL policy based on this model can lead to a better long-term reward for the user and higher click rate for the system.
APA
Chen, X., Li, S., Li, H., Jiang, S., Qi, Y. & Song, L. (2019). Generative Adversarial User Model for Reinforcement Learning Based Recommendation System. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1052-1061. Available from https://proceedings.mlr.press/v97/chen19f.html.
