Robust Pure Exploration in Linear Bandits with Limited Budget

Ayya Alieva, Ashok Cutkosky, Abhimanyu Das
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:187-195, 2021.

Abstract

We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem dependent and problem independent bounds. Our success probability is never worse than that of an algorithm that ignores the linear structure, but seamlessly takes advantage of such structure when possible. Furthermore, we only need the number of samples to scale with the dimension of the problem rather than the number of arms. We complement our theoretical results with empirical validation.
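To make the setting concrete, below is a minimal, hedged Python sketch of a naive fixed-budget baseline for best-arm identification in a linear bandit: it computes an approximate G-optimal experimental design over the arm vectors, spends the budget according to that design, fits a least-squares estimate of the reward parameter, and returns the arm with the highest estimated reward. This is an illustrative baseline only, not the algorithm proposed in the paper; the function names, the regularization constants, and the budget-rounding scheme are assumptions made for illustration.

# Illustrative sketch only: a naive fixed-budget baseline for best-arm
# identification in a linear bandit, NOT the algorithm from this paper.
import numpy as np

def g_optimal_design(X, iters=500):
    """Approximate G-optimal design weights over the rows of X (K x d)
    via Frank-Wolfe style updates (Kiefer-Wolfowitz equivalence)."""
    K, d = X.shape
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        A = X.T @ (w[:, None] * X) + 1e-8 * np.eye(d)
        A_inv = np.linalg.inv(A)
        # Predicted variance of each arm under the current design.
        var = np.einsum('ij,jk,ik->i', X, A_inv, X)
        j = np.argmax(var)
        step = np.clip((var[j] / d - 1.0) / (var[j] - 1.0 + 1e-12), 0.0, 1.0)
        w = (1.0 - step) * w
        w[j] += step
    return w

def identify_best_arm(X, theta, budget, noise_std=1.0, rng=None):
    """Spend `budget` noisy pulls according to the design, then return the
    arm with the largest least-squares reward estimate."""
    rng = np.random.default_rng(rng)
    K, d = X.shape
    counts = np.round(g_optimal_design(X) * budget).astype(int)
    A = 1e-8 * np.eye(d)
    b = np.zeros(d)
    for i in range(K):
        for _ in range(counts[i]):
            reward = X[i] @ theta + noise_std * rng.standard_normal()
            A += np.outer(X[i], X[i])
            b += reward * X[i]
    theta_hat = np.linalg.solve(A, b)
    return int(np.argmax(X @ theta_hat))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K = 5, 50
    X = rng.standard_normal((K, d))       # arm feature vectors
    theta = rng.standard_normal(d)        # unknown reward parameter
    best = int(np.argmax(X @ theta))
    guess = identify_best_arm(X, theta, budget=2000, noise_std=1.0, rng=1)
    print("true best arm:", best, "estimated best arm:", guess)

Note that this sketch requires the number of pulls to cover the design over all K arms and assumes the linear model and noise level are exactly known; the paper's contribution is precisely an algorithm whose sample requirement scales with the dimension d rather than with K and which remains robust to unknown noise levels and moderate misspecification.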

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-alieva21a,
  title     = {Robust Pure Exploration in Linear Bandits with Limited Budget},
  author    = {Alieva, Ayya and Cutkosky, Ashok and Das, Abhimanyu},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {187--195},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/alieva21a/alieva21a.pdf},
  url       = {https://proceedings.mlr.press/v139/alieva21a.html},
  abstract  = {We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem dependent and problem independent bounds. Our success probability is never worse than that of an algorithm that ignores the linear structure, but seamlessly takes advantage of such structure when possible. Furthermore, we only need the number of samples to scale with the dimension of the problem rather than the number of arms. We complement our theoretical results with empirical validation.}
}
Endnote
%0 Conference Paper
%T Robust Pure Exploration in Linear Bandits with Limited Budget
%A Ayya Alieva
%A Ashok Cutkosky
%A Abhimanyu Das
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-alieva21a
%I PMLR
%P 187--195
%U https://proceedings.mlr.press/v139/alieva21a.html
%V 139
%X We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust to unknown levels of observation noise as well as to moderate levels of misspecification in the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem dependent and problem independent bounds. Our success probability is never worse than that of an algorithm that ignores the linear structure, but seamlessly takes advantage of such structure when possible. Furthermore, we only need the number of samples to scale with the dimension of the problem rather than the number of arms. We complement our theoretical results with empirical validation.
APA
Alieva, A., Cutkosky, A., & Das, A. (2021). Robust Pure Exploration in Linear Bandits with Limited Budget. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:187-195. Available from https://proceedings.mlr.press/v139/alieva21a.html.
