Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm

Tongyi Cao, Akshay Krishnamurthy
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:558-588, 2019.

Abstract

We design new algorithms for the combinatorial pure exploration problem in the multi-arm bandit framework. In this problem, we are given $K$ distributions and a collection of subsets $\mathcal{V} \subset 2^{[K]}$ of these distributions, and we would like to find the subset $v \in \mathcal{V}$ that has largest mean, while collecting, in a sequential fashion, as few samples from the distributions as possible. In both the fixed budget and fixed confidence settings, our algorithms achieve new sample-complexity bounds that provide polynomial improvements on previous results in some settings. Via an information-theoretic lower bound, we show that no approach based on uniform sampling can improve on ours in any regime, yielding the first interactive algorithms for this problem with this basic property. Computationally, we show how to efficiently implement our fixed confidence algorithm whenever $\mathcal{V}$ supports efficient linear optimization. Our results involve precise concentration-of-measure arguments and a new algorithm for linear programming with exponentially many constraints.
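To make the problem setup concrete, here is a toy illustration of combinatorial pure exploration under the naive uniform-sampling baseline the paper's lower bound compares against: every arm is sampled equally, and the subset in $\mathcal{V}$ with the largest empirical mean-sum is returned. This is only a sketch of the problem, not the paper's adaptive algorithms; the Gaussian rewards, arm means, and subset collection below are illustrative assumptions.

```python
import random

def uniform_sampling_cpe(means, subsets, samples_per_arm, seed=0):
    """Naive uniform-sampling baseline for combinatorial pure exploration:
    draw the same number of samples from each of the K arms, then return
    the candidate subset whose sum of empirical means is largest.
    (Toy illustration only; not the paper's adaptive algorithm.)"""
    rng = random.Random(seed)
    K = len(means)
    # Empirical mean of each arm under unit-variance Gaussian rewards
    # (an assumed reward model for this example).
    emp = [
        sum(rng.gauss(means[i], 1.0) for _ in range(samples_per_arm)) / samples_per_arm
        for i in range(K)
    ]
    # The "mean" of a subset v is the sum of its arms' means.
    return max(subsets, key=lambda v: sum(emp[i] for i in v))

# Example: K = 4 arms, candidate collection V of size-2 subsets.
means = [0.9, 0.8, 0.2, 0.1]
subsets = [(0, 1), (0, 2), (1, 3), (2, 3)]
best = uniform_sampling_cpe(means, subsets, samples_per_arm=500)
```

With these well-separated means, the uniform strategy recovers the best subset (0, 1); the paper's contribution is adaptive algorithms whose sample complexity provably improves on this baseline in some regimes, together with a lower bound showing no uniform-sampling approach can beat them.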

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-cao19a,
  title     = {Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm},
  author    = {Cao, Tongyi and Krishnamurthy, Akshay},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {558--588},
  year      = {2019},
  editor    = {Alina Beygelzimer and Daniel Hsu},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  address   = {Phoenix, USA},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/cao19a/cao19a.pdf},
  url       = {http://proceedings.mlr.press/v99/cao19a.html},
  abstract  = {We design new algorithms for the combinatorial pure exploration problem in the multi-arm bandit framework. In this problem, we are given $K$ distributions and a collection of subsets $\mathcal{V} \subset 2^{[K]}$ of these distributions, and we would like to find the subset $v \in \mathcal{V}$ that has largest mean, while collecting, in a sequential fashion, as few samples from the distributions as possible. In both the fixed budget and fixed confidence settings, our algorithms achieve new sample-complexity bounds that provide polynomial improvements on previous results in some settings. Via an information-theoretic lower bound, we show that no approach based on uniform sampling can improve on ours in any regime, yielding the first interactive algorithms for this problem with this basic property. Computationally, we show how to efficiently implement our fixed confidence algorithm whenever $\mathcal{V}$ supports efficient linear optimization. Our results involve precise concentration-of-measure arguments and a new algorithm for linear programming with exponentially many constraints.}
}
Endnote
%0 Conference Paper
%T Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm
%A Tongyi Cao
%A Akshay Krishnamurthy
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-cao19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 558--588
%U http://proceedings.mlr.press
%V 99
%W PMLR
%X We design new algorithms for the combinatorial pure exploration problem in the multi-arm bandit framework. In this problem, we are given $K$ distributions and a collection of subsets $\mathcal{V} \subset 2^{[K]}$ of these distributions, and we would like to find the subset $v \in \mathcal{V}$ that has largest mean, while collecting, in a sequential fashion, as few samples from the distributions as possible. In both the fixed budget and fixed confidence settings, our algorithms achieve new sample-complexity bounds that provide polynomial improvements on previous results in some settings. Via an information-theoretic lower bound, we show that no approach based on uniform sampling can improve on ours in any regime, yielding the first interactive algorithms for this problem with this basic property. Computationally, we show how to efficiently implement our fixed confidence algorithm whenever $\mathcal{V}$ supports efficient linear optimization. Our results involve precise concentration-of-measure arguments and a new algorithm for linear programming with exponentially many constraints.
APA
Cao, T. & Krishnamurthy, A. (2019). Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:558-588.