PAC Lower Bounds and Efficient Algorithms for The Max K-Armed Bandit Problem

Yahel David, Nahum Shimkin
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:878-887, 2016.

Abstract

We consider the Max K-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards of unknown distribution. At each time step the agent chooses an arm, and observes the reward of the obtained sample. Each sample is considered here as a separate item with the reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the tail function of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any (ε,δ)-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We provide an analysis of the robustness of the proposed algorithm to the model assumptions, and further compare its performance to the simple non-adaptive variant, in which the arms are chosen randomly at each stage.
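The non-adaptive variant mentioned at the end of the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the function name, the callable-arm interface, and the Gaussian example arms are all assumptions made here for demonstration.

```python
import random

def max_k_bandit_uniform(arms, budget, rng=random):
    """Non-adaptive baseline for the max K-armed bandit: sample arms
    uniformly at random and return the single largest reward observed.

    `arms` is a list of zero-argument callables, each drawing one i.i.d.
    reward from that arm's (unknown) distribution.
    """
    best = float("-inf")
    for _ in range(budget):
        arm = rng.choice(arms)   # pick an arm uniformly at random
        best = max(best, arm())  # keep the maximum reward seen so far
    return best

# Illustrative arms: Gaussians with different scales, so heavier-tailed
# arms are more likely to produce the overall maximum.
rng = random.Random(0)
arms = [lambda s=s: rng.gauss(0.0, s) for s in (0.5, 1.0, 2.0)]
print(max_k_bandit_uniform(arms, budget=300, rng=rng))
```

An adaptive algorithm of the kind the paper proposes would instead allocate more samples to arms whose observed tails look more promising; the uniform sampler above serves only as the comparison baseline described in the abstract.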

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-david16,
  title     = {PAC Lower Bounds and Efficient Algorithms for The Max $K$-Armed Bandit Problem},
  author    = {David, Yahel and Shimkin, Nahum},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {878--887},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/david16.pdf},
  url       = {https://proceedings.mlr.press/v48/david16.html},
  abstract  = {We consider the Max K-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards of unknown distribution. At each time step the agent chooses an arm, and observes the reward of the obtained sample. Each sample is considered here as a separate item with the reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the \emph{tail function} of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any (ε,δ)-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We provide an analysis of the robustness of the proposed algorithm to the model assumptions, and further compare its performance to the simple non-adaptive variant, in which the arms are chosen randomly at each stage.}
}
Endnote
%0 Conference Paper
%T PAC Lower Bounds and Efficient Algorithms for The Max K-Armed Bandit Problem
%A Yahel David
%A Nahum Shimkin
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-david16
%I PMLR
%P 878--887
%U https://proceedings.mlr.press/v48/david16.html
%V 48
%X We consider the Max K-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards of unknown distribution. At each time step the agent chooses an arm, and observes the reward of the obtained sample. Each sample is considered here as a separate item with the reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the tail function of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any (ε,δ)-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We provide an analysis of the robustness of the proposed algorithm to the model assumptions, and further compare its performance to the simple non-adaptive variant, in which the arms are chosen randomly at each stage.
RIS
TY  - CPAPER
TI  - PAC Lower Bounds and Efficient Algorithms for The Max K-Armed Bandit Problem
AU  - Yahel David
AU  - Nahum Shimkin
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-david16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 878
EP  - 887
L1  - http://proceedings.mlr.press/v48/david16.pdf
UR  - https://proceedings.mlr.press/v48/david16.html
AB  - We consider the Max K-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards of unknown distribution. At each time step the agent chooses an arm, and observes the reward of the obtained sample. Each sample is considered here as a separate item with the reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the tail function of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any (ε,δ)-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We provide an analysis of the robustness of the proposed algorithm to the model assumptions, and further compare its performance to the simple non-adaptive variant, in which the arms are chosen randomly at each stage.
ER  -
APA
David, Y. & Shimkin, N. (2016). PAC Lower Bounds and Efficient Algorithms for The Max K-Armed Bandit Problem. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:878-887. Available from https://proceedings.mlr.press/v48/david16.html.