Pure Exploration in Bandits with Linear Constraints

Emil Carlsson, Debabrota Basu, Fredrik Johansson, Devdatt Dubhashi
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:334-342, 2024.

Abstract

We address the problem of identifying the optimal policy with a fixed confidence level in a multi-armed bandit setup, when \emph{the arms are subject to linear constraints}. Unlike the standard best-arm identification problem which is well studied, the optimal policy in this case may not be deterministic and could mix between several arms. This changes the geometry of the problem which we characterize via an information-theoretic lower bound. We introduce two asymptotically optimal algorithms for this setting, one based on the Track-and-Stop method and the other based on a game-theoretic approach. Both these algorithms try to track an optimal allocation based on the lower bound and computed by a weighted projection onto the boundary of a normal cone. Finally, we provide empirical results that validate our bounds and visualize how constraints change the hardness of the problem.
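The optimal policy the abstract refers to — a possibly randomized mixture over arms maximizing expected reward subject to linear constraints — can be illustrated as a small linear program. The sketch below is not from the paper; the arm means, cost vector, and budget are invented for illustration, and it only shows why the optimum can mix several arms (it does not implement the paper's Track-and-Stop or game-theoretic algorithms):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 3 arms with mean rewards mu, and one linear
# constraint A @ pi <= b on the policy pi (e.g., a per-round cost budget),
# together with the simplex constraints sum(pi) = 1, pi >= 0.
mu = np.array([1.0, 0.8, 0.5])   # mean rewards (taken as known here)
A = np.array([[1.0, 0.2, 0.0]])  # per-arm costs
b = np.array([0.5])              # cost budget

# linprog minimizes, so negate mu to maximize expected reward mu @ pi.
res = linprog(c=-mu,
              A_ub=A, b_ub=b,
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 3)
pi_star = res.x

# The budget rules out playing the best arm deterministically, so the
# optimal policy mixes arms 0 and 1 rather than concentrating on one arm.
print(pi_star, mu @ pi_star)
```

With these numbers the constraint binds, and the solver returns a mixture over the two best arms — the non-deterministic optimal policy that, per the abstract, changes the geometry of the identification problem relative to standard best-arm identification.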

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-carlsson24a,
  title     = {Pure Exploration in Bandits with Linear Constraints},
  author    = {Carlsson, Emil and Basu, Debabrota and Johansson, Fredrik and Dubhashi, Devdatt},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {334--342},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/carlsson24a/carlsson24a.pdf},
  url       = {https://proceedings.mlr.press/v238/carlsson24a.html},
  abstract  = {We address the problem of identifying the optimal policy with a fixed confidence level in a multi-armed bandit setup, when \emph{the arms are subject to linear constraints}. Unlike the standard best-arm identification problem which is well studied, the optimal policy in this case may not be deterministic and could mix between several arms. This changes the geometry of the problem which we characterize via an information-theoretic lower bound. We introduce two asymptotically optimal algorithms for this setting, one based on the Track-and-Stop method and the other based on a game-theoretic approach. Both these algorithms try to track an optimal allocation based on the lower bound and computed by a weighted projection onto the boundary of a normal cone. Finally, we provide empirical results that validate our bounds and visualize how constraints change the hardness of the problem.}
}
Endnote
%0 Conference Paper
%T Pure Exploration in Bandits with Linear Constraints
%A Emil Carlsson
%A Debabrota Basu
%A Fredrik Johansson
%A Devdatt Dubhashi
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-carlsson24a
%I PMLR
%P 334--342
%U https://proceedings.mlr.press/v238/carlsson24a.html
%V 238
%X We address the problem of identifying the optimal policy with a fixed confidence level in a multi-armed bandit setup, when \emph{the arms are subject to linear constraints}. Unlike the standard best-arm identification problem which is well studied, the optimal policy in this case may not be deterministic and could mix between several arms. This changes the geometry of the problem which we characterize via an information-theoretic lower bound. We introduce two asymptotically optimal algorithms for this setting, one based on the Track-and-Stop method and the other based on a game-theoretic approach. Both these algorithms try to track an optimal allocation based on the lower bound and computed by a weighted projection onto the boundary of a normal cone. Finally, we provide empirical results that validate our bounds and visualize how constraints change the hardness of the problem.
APA
Carlsson, E., Basu, D., Johansson, F. & Dubhashi, D. (2024). Pure Exploration in Bandits with Linear Constraints. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:334-342. Available from https://proceedings.mlr.press/v238/carlsson24a.html.