Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits

Aurelien Bibaut, Antoine Chambaz, Mark van der Laan
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:1099-1108, 2020.

Abstract

We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of Dudik et al. [2011]. We prove the first regret-optimality guarantee theorem for an oracle-efficient CB algorithm competing against a nonparametric class with infinite VC-dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an $\varepsilon$-greedy algorithm with regret bound matching that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes, for which the relevant optimization oracles can be efficiently implemented.

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-bibaut20a,
  title = {Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits},
  author = {Bibaut, Aurelien and Chambaz, Antoine and van der Laan, Mark},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages = {1099--1108},
  year = {2020},
  editor = {Jonas Peters and David Sontag},
  volume = {124},
  series = {Proceedings of Machine Learning Research},
  month = {03--06 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v124/bibaut20a/bibaut20a.pdf},
  url = {http://proceedings.mlr.press/v124/bibaut20a.html},
  abstract = {We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of Dudik et al. [2011]. We prove the first regret-optimality guarantee theorem for an oracle-efficient CB algorithm competing against a nonparametric class with infinite VC-dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an $\varepsilon$-greedy algorithm with regret bound matching that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes, for which the relevant optimization oracles can be efficiently implemented.}
}
Endnote
%0 Conference Paper
%T Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits
%A Aurelien Bibaut
%A Antoine Chambaz
%A Mark van der Laan
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-bibaut20a
%I PMLR
%P 1099--1108
%U http://proceedings.mlr.press/v124/bibaut20a.html
%V 124
%X We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of Dudik et al. [2011]. We prove the first regret-optimality guarantee theorem for an oracle-efficient CB algorithm competing against a nonparametric class with infinite VC-dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an $\varepsilon$-greedy algorithm with regret bound matching that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes, for which the relevant optimization oracles can be efficiently implemented.
APA
Bibaut, A., Chambaz, A. & van der Laan, M. (2020). Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:1099-1108. Available from http://proceedings.mlr.press/v124/bibaut20a.html.