Spectral Bandits for Smooth Graph Functions

Michal Valko, Remi Munos, Branislav Kveton, Tomáš Kocák
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):46-54, 2014.

Abstract

Smooth functions on graphs have wide applications in manifold and semi-supervised learning. In this paper, we study a bandit problem where the payoffs of arms are smooth on a graph. This framework is suitable for solving online learning problems that involve graphs, such as content-based recommendation. In this problem, each item we can recommend is a node, and its expected rating is similar to those of its neighbors. The goal is to recommend items that have high expected ratings. We aim for algorithms whose cumulative regret with respect to the optimal policy does not scale poorly with the number of nodes. In particular, we introduce the notion of an effective dimension, which is small in real-world graphs, and propose two algorithms for solving our problem that scale linearly and sublinearly in this dimension. Our experiments on a real-world content recommendation problem show that a good estimator of user preferences for thousands of items can be learned from just tens of node evaluations.
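The abstract's premise is that arm payoffs are "smooth on a graph." A standard way to quantify this (a minimal illustrative sketch, not code from the paper) is the Laplacian quadratic form f^T L f, which equals the sum of squared differences of f across the graph's edges; a function whose values change little between neighboring nodes has a small quadratic form.

```python
# Illustrative sketch: measuring smoothness of a node function with the
# graph Laplacian quadratic form f^T L f = sum over edges (i,j) of (f_i - f_j)^2.
# The graph and functions below are toy examples, not from the paper.
import numpy as np

# A path graph on 5 nodes: edges (0,1), (1,2), (2,3), (3,4).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

smooth = np.array([1.0, 1.1, 1.2, 1.3, 1.4])   # varies slowly along edges
rough = np.array([1.0, -1.0, 1.0, -1.0, 1.0])  # flips sign across every edge

def quadratic_form(f):
    """Return f^T L f, i.e. the sum of (f_i - f_j)^2 over edges."""
    return float(f @ L @ f)

print(quadratic_form(smooth))  # ~0.04: four edges, each with difference 0.1
print(quadratic_form(rough))   # 16.0: four edges, each with difference 2.0
```

The smooth function incurs a tiny penalty while the oscillating one incurs a large one, which is the sense in which neighboring items on the graph have similar expected ratings.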

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-valko14,
  title     = {Spectral Bandits for Smooth Graph Functions},
  author    = {Michal Valko and Remi Munos and Branislav Kveton and Tomáš Kocák},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {46--54},
  year      = {2014},
  editor    = {Eric P. Xing and Tony Jebara},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/valko14.pdf},
  url       = {http://proceedings.mlr.press/v32/valko14.html},
  abstract  = {Smooth functions on graphs have wide applications in manifold and semi-supervised learning. In this paper, we study a bandit problem where the payoffs of arms are smooth on a graph. This framework is suitable for solving online learning problems that involve graphs, such as content-based recommendation. In this problem, each item we can recommend is a node and its expected rating is similar to its neighbors. The goal is to recommend items that have high expected ratings. We aim for the algorithms where the cumulative regret with respect to the optimal policy would not scale poorly with the number of nodes. In particular, we introduce the notion of an effective dimension, which is small in real-world graphs, and propose two algorithms for solving our problem that scale linearly and sublinearly in this dimension. Our experiments on real-world content recommendation problem show that a good estimator of user preferences for thousands of items can be learned from just tens of nodes evaluations.}
}
Endnote
%0 Conference Paper
%T Spectral Bandits for Smooth Graph Functions
%A Michal Valko
%A Remi Munos
%A Branislav Kveton
%A Tomáš Kocák
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-valko14
%I PMLR
%J Proceedings of Machine Learning Research
%P 46--54
%U http://proceedings.mlr.press
%V 32
%N 2
%W PMLR
%X Smooth functions on graphs have wide applications in manifold and semi-supervised learning. In this paper, we study a bandit problem where the payoffs of arms are smooth on a graph. This framework is suitable for solving online learning problems that involve graphs, such as content-based recommendation. In this problem, each item we can recommend is a node and its expected rating is similar to its neighbors. The goal is to recommend items that have high expected ratings. We aim for the algorithms where the cumulative regret with respect to the optimal policy would not scale poorly with the number of nodes. In particular, we introduce the notion of an effective dimension, which is small in real-world graphs, and propose two algorithms for solving our problem that scale linearly and sublinearly in this dimension. Our experiments on real-world content recommendation problem show that a good estimator of user preferences for thousands of items can be learned from just tens of nodes evaluations.
RIS
TY - CPAPER
TI - Spectral Bandits for Smooth Graph Functions
AU - Michal Valko
AU - Remi Munos
AU - Branislav Kveton
AU - Tomáš Kocák
BT - Proceedings of the 31st International Conference on Machine Learning
PY - 2014/01/27
DA - 2014/01/27
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-valko14
PB - PMLR
SP - 46
EP - 54
DP - PMLR
L1 - http://proceedings.mlr.press/v32/valko14.pdf
UR - http://proceedings.mlr.press/v32/valko14.html
AB - Smooth functions on graphs have wide applications in manifold and semi-supervised learning. In this paper, we study a bandit problem where the payoffs of arms are smooth on a graph. This framework is suitable for solving online learning problems that involve graphs, such as content-based recommendation. In this problem, each item we can recommend is a node and its expected rating is similar to its neighbors. The goal is to recommend items that have high expected ratings. We aim for the algorithms where the cumulative regret with respect to the optimal policy would not scale poorly with the number of nodes. In particular, we introduce the notion of an effective dimension, which is small in real-world graphs, and propose two algorithms for solving our problem that scale linearly and sublinearly in this dimension. Our experiments on real-world content recommendation problem show that a good estimator of user preferences for thousands of items can be learned from just tens of nodes evaluations.
ER -
APA
Valko, M., Munos, R., Kveton, B. & Kocák, T. (2014). Spectral Bandits for Smooth Graph Functions. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):46-54