Bandits with partially observable confounded data

Guy Tennenholtz, Uri Shalit, Shie Mannor, Yonathan Efroni
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:430-439, 2021.

Abstract

We study linear contextual bandits with access to a large, confounded, offline dataset that was sampled from some fixed policy. We show that this problem is closely related to a variant of the bandit problem with side information. We construct a linear bandit algorithm that takes advantage of the projected information, and prove regret bounds. Our results demonstrate the ability to take advantage of confounded offline data. Particularly, we prove regret bounds that improve current bounds by a factor related to the visible dimensionality of the contexts in the data. Our results indicate that confounded offline data can significantly improve online learning algorithms. Finally, we demonstrate various characteristics of our approach through synthetic simulations.

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-tennenholtz21a,
  title     = {Bandits with partially observable confounded data},
  author    = {Tennenholtz, Guy and Shalit, Uri and Mannor, Shie and Efroni, Yonathan},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {430--439},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/tennenholtz21a/tennenholtz21a.pdf},
  url       = {https://proceedings.mlr.press/v161/tennenholtz21a.html},
  abstract  = {We study linear contextual bandits with access to a large, confounded, offline dataset that was sampled from some fixed policy. We show that this problem is closely related to a variant of the bandit problem with side information. We construct a linear bandit algorithm that takes advantage of the projected information, and prove regret bounds. Our results demonstrate the ability to take advantage of confounded offline data. Particularly, we prove regret bounds that improve current bounds by a factor related to the visible dimensionality of the contexts in the data. Our results indicate that confounded offline data can significantly improve online learning algorithms. Finally, we demonstrate various characteristics of our approach through synthetic simulations.}
}
Endnote
%0 Conference Paper
%T Bandits with partially observable confounded data
%A Guy Tennenholtz
%A Uri Shalit
%A Shie Mannor
%A Yonathan Efroni
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-tennenholtz21a
%I PMLR
%P 430--439
%U https://proceedings.mlr.press/v161/tennenholtz21a.html
%V 161
%X We study linear contextual bandits with access to a large, confounded, offline dataset that was sampled from some fixed policy. We show that this problem is closely related to a variant of the bandit problem with side information. We construct a linear bandit algorithm that takes advantage of the projected information, and prove regret bounds. Our results demonstrate the ability to take advantage of confounded offline data. Particularly, we prove regret bounds that improve current bounds by a factor related to the visible dimensionality of the contexts in the data. Our results indicate that confounded offline data can significantly improve online learning algorithms. Finally, we demonstrate various characteristics of our approach through synthetic simulations.
APA
Tennenholtz, G., Shalit, U., Mannor, S., & Efroni, Y. (2021). Bandits with partially observable confounded data. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:430-439. Available from https://proceedings.mlr.press/v161/tennenholtz21a.html.