Stochastic Rank-1 Bandits

Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Claire Vernade, Zheng Wen
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:392-401, 2017.

Abstract

We propose stochastic rank-1 bandits, a class of online learning problems where at each step a learning agent chooses a pair of row and column arms, and receives the product of their values as a reward. The main challenge of the problem is that the individual values of the row and column are unobserved. We assume that these values are stochastic and drawn independently. We propose a computationally efficient algorithm for solving our problem, which we call Rank1Elim. We derive an O((K + L)(1 / Delta) log n) upper bound on its n-step regret, where K is the number of rows, L is the number of columns, and Delta is the minimum of the row and column gaps, under the assumption that the mean row and column rewards are bounded away from zero. To the best of our knowledge, we present the first bandit algorithm that finds the maximum entry of a rank-1 matrix whose regret is linear in K + L, 1 / Delta, and log n. We also derive a nearly matching lower bound. Finally, we evaluate Rank1Elim empirically on multiple problems. We observe that it leverages the structure of our problems and can learn near-optimal solutions even if our modeling assumptions are mildly violated.
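The setting above can be illustrated with a small simulation. The sketch below is not the paper's exact Rank1Elim; it is a simplified elimination-style loop under the same reward model: pulling row i and column j yields the product of independent Bernoulli draws with means u[i] and v[j], and arms whose upper confidence bound falls below the best lower confidence bound are eliminated. The function name, confidence radius, and exploration schedule are illustrative assumptions, not the paper's specification.

```python
import math
import random

def rank1_elim_sketch(u, v, n, seed=0):
    """Simplified elimination sketch for a stochastic rank-1 bandit
    (illustrative only, not the paper's Rank1Elim).

    u, v: mean row and column values in (0, 1], assumed bounded away from 0.
    n:    time horizon (total number of pulls).
    Returns the surviving (row, column) pair with the highest empirical means.
    """
    rng = random.Random(seed)
    K, L = len(u), len(v)
    rows, cols = list(range(K)), list(range(L))
    row_sum, row_cnt = [0.0] * K, [0] * K
    col_sum, col_cnt = [0.0] * L, [0] * L
    t = 0

    def radius(cnt):
        # Hoeffding-style confidence radius; an illustrative choice.
        return math.sqrt(2 * math.log(n) / max(cnt, 1))

    def prune(arms, s, c):
        # Keep arms whose UCB is at least the best LCB among remaining arms.
        means = {a: s[a] / max(c[a], 1) for a in arms}
        best_lcb = max(means[a] - radius(c[a]) for a in arms)
        return [a for a in arms if means[a] + radius(c[a]) >= best_lcb]

    while t < n:
        # Explore each remaining row against a random remaining column, so all
        # row estimates are scaled by the same average column value (this is
        # why the means must be bounded away from zero); then symmetrically
        # explore the columns.
        for i in rows:
            if t >= n:
                break
            j = rng.choice(cols)
            r = (rng.random() < u[i]) * (rng.random() < v[j])
            row_sum[i] += r; row_cnt[i] += 1; t += 1
        for j in cols:
            if t >= n:
                break
            i = rng.choice(rows)
            r = (rng.random() < u[i]) * (rng.random() < v[j])
            col_sum[j] += r; col_cnt[j] += 1; t += 1
        rows = prune(rows, row_sum, row_cnt)
        cols = prune(cols, col_sum, col_cnt)

    best_row = max(rows, key=lambda i: row_sum[i] / max(row_cnt[i], 1))
    best_col = max(cols, key=lambda j: col_sum[j] / max(col_cnt[j], 1))
    return best_row, best_col
```

On a problem with clear gaps, e.g. u = [0.9, 0.2, 0.2] and v = [0.8, 0.1] with n = 20000, the sketch identifies the maximal entry (row 0, column 0) with high probability. Note how the rank-1 structure is exploited: each row is compared against the same randomized mixture of columns, so only K + L estimates are maintained rather than one per (row, column) pair, mirroring the K + L (rather than KL) dependence in the regret bound.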

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-katariya17a,
  title     = {{Stochastic Rank-1 Bandits}},
  author    = {Katariya, Sumeet and Kveton, Branislav and Szepesvari, Csaba and Vernade, Claire and Wen, Zheng},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {392--401},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/katariya17a/katariya17a.pdf},
  url       = {https://proceedings.mlr.press/v54/katariya17a.html},
  abstract  = {We propose stochastic rank-1 bandits, a class of online learning problems where at each step a learning agent chooses a pair of row and column arms, and receives the product of their values as a reward. The main challenge of the problem is that the individual values of the row and column are unobserved. We assume that these values are stochastic and drawn independently. We propose a computationally efficient algorithm for solving our problem, which we call Rank1Elim. We derive an O((K + L)(1 / Delta) log n) upper bound on its n-step regret, where K is the number of rows, L is the number of columns, and Delta is the minimum of the row and column gaps, under the assumption that the mean row and column rewards are bounded away from zero. To the best of our knowledge, we present the first bandit algorithm that finds the maximum entry of a rank-1 matrix whose regret is linear in K + L, 1 / Delta, and log n. We also derive a nearly matching lower bound. Finally, we evaluate Rank1Elim empirically on multiple problems. We observe that it leverages the structure of our problems and can learn near-optimal solutions even if our modeling assumptions are mildly violated.}
}
Endnote
%0 Conference Paper
%T Stochastic Rank-1 Bandits
%A Sumeet Katariya
%A Branislav Kveton
%A Csaba Szepesvari
%A Claire Vernade
%A Zheng Wen
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-katariya17a
%I PMLR
%P 392--401
%U https://proceedings.mlr.press/v54/katariya17a.html
%V 54
%X We propose stochastic rank-1 bandits, a class of online learning problems where at each step a learning agent chooses a pair of row and column arms, and receives the product of their values as a reward. The main challenge of the problem is that the individual values of the row and column are unobserved. We assume that these values are stochastic and drawn independently. We propose a computationally efficient algorithm for solving our problem, which we call Rank1Elim. We derive an O((K + L)(1 / Delta) log n) upper bound on its n-step regret, where K is the number of rows, L is the number of columns, and Delta is the minimum of the row and column gaps, under the assumption that the mean row and column rewards are bounded away from zero. To the best of our knowledge, we present the first bandit algorithm that finds the maximum entry of a rank-1 matrix whose regret is linear in K + L, 1 / Delta, and log n. We also derive a nearly matching lower bound. Finally, we evaluate Rank1Elim empirically on multiple problems. We observe that it leverages the structure of our problems and can learn near-optimal solutions even if our modeling assumptions are mildly violated.
APA
Katariya, S., Kveton, B., Szepesvari, C., Vernade, C. & Wen, Z. (2017). Stochastic Rank-1 Bandits. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:392-401. Available from https://proceedings.mlr.press/v54/katariya17a.html.