Self-Tuning Bandits over Unknown Covariate-Shifts

Joseph Suk, Samory Kpotufe
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:1114-1156, 2021.

Abstract

Bandits with covariates, a.k.a. \emph{contextual bandits}, address situations where the optimal actions (or arms) at a given time $t$ depend on a \emph{context} $x_t$, e.g., a new patient’s medical history or a consumer’s past purchases. While it is understood that the distribution of contexts might change over time, e.g., due to seasonality or deployment to new environments, the bulk of studies concern the most adversarial such changes, resulting in regret bounds that are often worst-case in nature. \emph{Covariate-shift}, on the other hand, has been considered in classification as a middle-ground formalism that can capture mild to relatively severe changes in distributions. We consider nonparametric bandits under such middle-ground scenarios and derive new regret bounds that tightly capture a continuum of changes in context distribution. Furthermore, we show that these rates can be \emph{adaptively} attained without knowledge of either the time of shift (change point) or the amount of shift.

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-suk21a,
  title     = {Self-Tuning Bandits over Unknown Covariate-Shifts},
  author    = {Suk, Joseph and Kpotufe, Samory},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages     = {1114--1156},
  year      = {2021},
  editor    = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume    = {132},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v132/suk21a/suk21a.pdf},
  url       = {https://proceedings.mlr.press/v132/suk21a.html},
  abstract  = {Bandits with covariates, a.k.a. \emph{contextual bandits}, address situations where the optimal actions (or arms) at a given time $t$ depend on a \emph{context} $x_t$, e.g., a new patient’s medical history or a consumer’s past purchases. While it is understood that the distribution of contexts might change over time, e.g., due to seasonality or deployment to new environments, the bulk of studies concern the most adversarial such changes, resulting in regret bounds that are often worst-case in nature. \emph{Covariate-shift}, on the other hand, has been considered in classification as a middle-ground formalism that can capture mild to relatively severe changes in distributions. We consider nonparametric bandits under such middle-ground scenarios and derive new regret bounds that tightly capture a continuum of changes in context distribution. Furthermore, we show that these rates can be \emph{adaptively} attained without knowledge of either the time of shift (change point) or the amount of shift.}
}
EndNote
%0 Conference Paper
%T Self-Tuning Bandits over Unknown Covariate-Shifts
%A Joseph Suk
%A Samory Kpotufe
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-suk21a
%I PMLR
%P 1114--1156
%U https://proceedings.mlr.press/v132/suk21a.html
%V 132
%X Bandits with covariates, a.k.a. \emph{contextual bandits}, address situations where the optimal actions (or arms) at a given time $t$ depend on a \emph{context} $x_t$, e.g., a new patient’s medical history or a consumer’s past purchases. While it is understood that the distribution of contexts might change over time, e.g., due to seasonality or deployment to new environments, the bulk of studies concern the most adversarial such changes, resulting in regret bounds that are often worst-case in nature. \emph{Covariate-shift}, on the other hand, has been considered in classification as a middle-ground formalism that can capture mild to relatively severe changes in distributions. We consider nonparametric bandits under such middle-ground scenarios and derive new regret bounds that tightly capture a continuum of changes in context distribution. Furthermore, we show that these rates can be \emph{adaptively} attained without knowledge of either the time of shift (change point) or the amount of shift.
APA
Suk, J. & Kpotufe, S. (2021). Self-Tuning Bandits over Unknown Covariate-Shifts. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:1114-1156. Available from https://proceedings.mlr.press/v132/suk21a.html.