Multitask Bandit Learning Through Heterogeneous Feedback Aggregation

Zhi Wang, Chicheng Zhang, Manish Kumar Singh, Laurel Riek, Kamalika Chaudhuri
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1531-1539, 2021.

Abstract

In many real-world applications, multiple agents seek to learn how to perform highly related yet slightly different tasks in an online bandit learning protocol. We formulate this problem as the $\epsilon$-multi-player multi-armed bandit problem, in which a set of players concurrently interact with a set of arms, and for each arm, the reward distributions for all players are similar but not necessarily identical. We develop an upper confidence bound-based algorithm, RobustAgg($\epsilon$), that adaptively aggregates rewards collected by different players. In the setting where an upper bound on the pairwise dissimilarities of reward distributions between players is known, we achieve instance-dependent regret guarantees that depend on the amenability of information sharing across players. We complement these upper bounds with nearly matching lower bounds. In the setting where pairwise dissimilarities are unknown, we provide a lower bound, as well as an algorithm that trades off minimax regret guarantees for adaptivity to unknown similarity structure.
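The core mechanism behind RobustAgg($\epsilon$) can be sketched in a few lines. The Python snippet below is a minimal illustration, not the paper's exact algorithm: the function and variable names are hypothetical, rewards are assumed to lie in $[0,1]$, and the exploration constants are simplified. It shows the aggregation principle the abstract describes: a player maintains an individual upper confidence bound built from its own pulls and an aggregated bound built from all players' pulls, inflates the latter by $\epsilon$ to absorb cross-player bias, and acts on the tighter of the two.

import math

def robust_agg_index(t, eps, n_own, mean_own, n_all, mean_all):
    """Illustrative index for one (player, arm) pair at round t.

    n_own, mean_own: pull count and empirical mean from this player alone.
    n_all, mean_all: pull count and empirical mean pooled over all players.
    eps: known upper bound on pairwise dissimilarity of mean rewards.
    """
    # Individual UCB: unbiased for this player, but data-poor.
    ucb_own = mean_own + math.sqrt(2.0 * math.log(max(t, 2)) / max(n_own, 1))
    # Aggregated UCB: data-rich, but other players' means may differ by up
    # to eps, so the confidence width is inflated by eps to remain valid.
    ucb_agg = mean_all + math.sqrt(2.0 * math.log(max(t, 2)) / max(n_all, 1)) + eps
    # Both quantities are valid upper bounds, so keep the tighter one;
    # the player then pulls the arm maximizing this index.
    return min(ucb_own, ucb_agg)

When an arm has been pulled often by other players and $\epsilon$ is small, the aggregated bound dominates and the player effectively borrows their data; when $\epsilon$ is large, the index falls back to a standard single-player UCB. This reflects the abstract's point that the gain from sharing depends on how amenable the instance is to cross-player information transfer.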

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-wang21e,
  title     = {Multitask Bandit Learning Through Heterogeneous Feedback Aggregation},
  author    = {Wang, Zhi and Zhang, Chicheng and Kumar Singh, Manish and Riek, Laurel and Chaudhuri, Kamalika},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1531--1539},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/wang21e/wang21e.pdf},
  url       = {https://proceedings.mlr.press/v130/wang21e.html}
}
Endnote
%0 Conference Paper
%T Multitask Bandit Learning Through Heterogeneous Feedback Aggregation
%A Zhi Wang
%A Chicheng Zhang
%A Manish Kumar Singh
%A Laurel Riek
%A Kamalika Chaudhuri
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-wang21e
%I PMLR
%P 1531--1539
%U https://proceedings.mlr.press/v130/wang21e.html
%V 130
APA
Wang, Z., Zhang, C., Kumar Singh, M., Riek, L. & Chaudhuri, K. (2021). Multitask Bandit Learning Through Heterogeneous Feedback Aggregation. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1531-1539. Available from https://proceedings.mlr.press/v130/wang21e.html.
