Conservative Exploration using Interleaving

Sumeet Katariya, Branislav Kveton, Zheng Wen, Vamsi K. Potluru
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:954-963, 2019.

Abstract

In many practical problems, a learning agent may want to learn the best action in hindsight without ever taking a bad action, i.e., one that is much worse than a default production action. In general, this is impossible because the agent has to explore unknown actions, some of which can be bad, in order to learn better actions. However, when the actions are structured, this becomes possible if an unknown action can be evaluated by interleaving it with the default action. We formalize this concept as learning in stochastic combinatorial semi-bandits with exchangeable actions. We design efficient learning algorithms for this problem, bound their n-step regret, and evaluate them on both synthetic and real-world problems. Our real-world experiments show that our algorithms can learn to recommend the K most attractive movies without ever making disastrous recommendations, both overall and subject to a diversity constraint.
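The abstract sketches the key idea: an unknown item can be explored by interleaving it into the known-safe default list, so the played action never strays far from the default while semi-bandit feedback is still collected on the swapped-in item. As a rough illustration only (this is not the paper's algorithm; the Bernoulli click probabilities, the UCB index, and the single-item swap rule below are all assumptions made for the example), a toy simulation of that idea might look like this:

# Illustrative sketch only, NOT the algorithm from the paper.
# Toy combinatorial semi-bandit: an action is a list of K items (e.g., movies),
# each item gives an independent Bernoulli reward, and exploration is done by
# interleaving at most one optimistic candidate into a safe default list.
import numpy as np

rng = np.random.default_rng(0)

n_items = 20                                          # ground set of items
K = 4                                                 # size of a recommended list
true_means = rng.uniform(0.1, 0.9, size=n_items)      # unknown click rates (assumed)

# Default production action: a decent but suboptimal list (misses the top 2 items).
default_list = list(np.argsort(true_means)[-K - 2:-2])

counts = np.zeros(n_items)
sums = np.zeros(n_items)

def ucb(i, t):
    """Optimistic index of item i after t rounds (standard UCB1-style bonus)."""
    if counts[i] == 0:
        return float("inf")
    return sums[i] / counts[i] + np.sqrt(2.0 * np.log(t + 1) / counts[i])

n_rounds = 2000
for t in range(n_rounds):
    # Optimistic best list according to the current indices.
    ucbs = np.array([ucb(i, t) for i in range(n_items)])
    optimistic_list = list(np.argsort(ucbs)[-K:])

    # Interleave: play the default list, but swap in at most ONE optimistic
    # candidate, so the played action stays close to the default action.
    played = list(default_list)
    for i in optimistic_list:
        if i not in played:
            played[0] = i          # replace a single default slot with the candidate
            break

    # Semi-bandit feedback: a reward is observed for every item in the played list.
    for i in played:
        r = float(rng.random() < true_means[i])
        counts[i] += 1
        sums[i] += r

best_list = set(np.argsort(true_means)[-K:])
learned = set(np.argsort(np.divide(sums, np.maximum(counts, 1)))[-K:])
print("overlap with the true best list:", len(best_list & learned), "/", K)

In this toy version, at most one of the K recommended slots deviates from the default list in any round, which is the sense in which exploration stays conservative while the learner still gathers per-item feedback on the candidate it swapped in.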

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-katariya19a,
  title     = {Conservative Exploration using Interleaving},
  author    = {Katariya, Sumeet and Kveton, Branislav and Wen, Zheng and Potluru, Vamsi K.},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {954--963},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/katariya19a/katariya19a.pdf},
  url       = {https://proceedings.mlr.press/v89/katariya19a.html},
  abstract  = {In many practical problems, a learning agent may want to learn the best action in hindsight without ever taking a bad action, which is much worse than a default production action. In general, this is impossible because the agent has to explore unknown actions, some of which can be bad, to learn better actions. However, when the actions are structured, this is possible if the unknown action can be evaluated by interleaving it with the default action. We formalize this concept as learning in stochastic combinatorial semi-bandits with exchangeable actions. We design efficient learning algorithms for this problem, bound their n-step regret, and evaluate them on both synthetic and real-world problems. Our real-world experiments show that our algorithms can learn to recommend K most attractive movies without ever making disastrous recommendations, both overall and subject to a diversity constraint.}
}
Endnote
%0 Conference Paper
%T Conservative Exploration using Interleaving
%A Sumeet Katariya
%A Branislav Kveton
%A Zheng Wen
%A Vamsi K. Potluru
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-katariya19a
%I PMLR
%P 954--963
%U https://proceedings.mlr.press/v89/katariya19a.html
%V 89
%X In many practical problems, a learning agent may want to learn the best action in hindsight without ever taking a bad action, which is much worse than a default production action. In general, this is impossible because the agent has to explore unknown actions, some of which can be bad, to learn better actions. However, when the actions are structured, this is possible if the unknown action can be evaluated by interleaving it with the default action. We formalize this concept as learning in stochastic combinatorial semi-bandits with exchangeable actions. We design efficient learning algorithms for this problem, bound their n-step regret, and evaluate them on both synthetic and real-world problems. Our real-world experiments show that our algorithms can learn to recommend K most attractive movies without ever making disastrous recommendations, both overall and subject to a diversity constraint.
APA
Katariya, S., Kveton, B., Wen, Z. & Potluru, V.K. (2019). Conservative Exploration using Interleaving. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:954-963. Available from https://proceedings.mlr.press/v89/katariya19a.html.
