Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles

Aldo Gael Carranza, Sanath Kumar Krishnamurthy, Susan Athey
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:7190-7212, 2023.

Abstract

Contextual bandit algorithms often estimate reward models to inform decision-making. However, true rewards can contain action-independent redundancies that are not relevant for decision-making. We show it is more data-efficient to estimate any function that explains the reward differences between actions, that is, the treatment effects. Motivated by this observation, and building on recent work on oracle-based bandit algorithms, we provide the first reduction of contextual bandits to general-purpose heterogeneous treatment effect estimation, and we design a simple and computationally efficient algorithm based on this reduction. Our theoretical and experimental results demonstrate that heterogeneous treatment effect estimation in contextual bandits offers practical advantages over reward estimation, including more efficient model estimation and greater flexibility under model misspecification.
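
The abstract's central observation admits a one-line formalization. The following is a minimal sketch of the intuition only; the symbols g and \tau are illustrative and not taken from the paper. Any reward function over contexts x and actions a can be split into an action-independent term and the action-dependent differences that actually drive decisions:

    r(x, a) = g(x) + \tau(x, a), \qquad \arg\max_{a} r(x, a) = \arg\max_{a} \tau(x, a)

Because g(x) cancels whenever two actions are compared, a learner only needs an estimate of the treatment-effect function \tau, which may belong to a much simpler class than the full reward r even when g is arbitrarily complex. This is the sense in which reward models can carry action-independent redundancies.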

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-carranza23a,
  title     = {Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles},
  author    = {Carranza, Aldo Gael and Krishnamurthy, Sanath Kumar and Athey, Susan},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {7190--7212},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/carranza23a/carranza23a.pdf},
  url       = {https://proceedings.mlr.press/v206/carranza23a.html}
}
Endnote
%0 Conference Paper
%T Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles
%A Aldo Gael Carranza
%A Sanath Kumar Krishnamurthy
%A Susan Athey
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-carranza23a
%I PMLR
%P 7190--7212
%U https://proceedings.mlr.press/v206/carranza23a.html
%V 206
APA
Carranza, A.G., Krishnamurthy, S.K. & Athey, S. (2023). Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:7190-7212. Available from https://proceedings.mlr.press/v206/carranza23a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v206/carranza23a/carranza23a.pdf