Debiasing Samples from Online Learning Using Bootstrap

Ningyuan Chen, Xuefeng Gao, Yi Xiong
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8514-8533, 2022.

Abstract

It has been recently shown in the literature (Nie et al., 2018; Shin et al., 2019a,b) that the sample averages from online learning experiments are biased when used to estimate the mean reward. To correct the bias, off-policy evaluation methods, including importance sampling and doubly robust estimators, typically calculate the conditional propensity score, which is ill-defined for non-randomized policies such as UCB. This paper provides a procedure to debias the samples using bootstrap, which does not require knowledge of the reward distribution and can be applied to any adaptive policy. Numerical experiments demonstrate the effective bias reduction for samples generated by popular multi-armed bandit algorithms such as Explore-Then-Commit (ETC), UCB, Thompson sampling (TS) and $\epsilon$-greedy (EG). We analyze and provide theoretical justifications for the procedure under the ETC algorithm, including the asymptotic convergence of the bias decay rate in the real and bootstrap worlds.
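To illustrate the general idea (a minimal sketch, not the authors' exact procedure), the listing below simulates a two-armed bandit run with ETC, where the naive sample mean of an arm is biased because the data were collected adaptively. It then re-runs the same policy on rewards resampled from the observed data (a "bootstrap world"), estimates the bias of the sample mean there relative to the plug-in truth, and subtracts that estimate. All specifics (Gaussian rewards, horizon, exploration length, number of bootstrap replications) are hypothetical choices for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def run_etc(reward_fn, horizon, explore_rounds):
    """Run Explore-Then-Commit on a 2-armed bandit; return the rewards observed per arm."""
    samples = {0: [], 1: []}
    # Exploration phase: pull each arm explore_rounds times.
    for _ in range(explore_rounds):
        for arm in (0, 1):
            samples[arm].append(reward_fn(arm))
    # Commit phase: pull the empirically better arm for the rest of the horizon.
    best = max((0, 1), key=lambda a: np.mean(samples[a]))
    for _ in range(horizon - 2 * explore_rounds):
        samples[best].append(reward_fn(best))
    return samples

# "Real world": adaptively collected data; the naive per-arm sample means are biased.
true_means = [0.0, 0.1]  # hypothetical ground truth, unknown to the learner
real = run_etc(lambda a: rng.normal(true_means[a], 1.0), horizon=200, explore_rounds=10)
naive = {a: np.mean(real[a]) for a in (0, 1)}

# "Bootstrap world": re-run the same adaptive policy on resampled rewards.
# There the plug-in truth is the naive sample mean itself, so the average
# discrepancy across replications estimates the real-world bias.
B = 500
boot_bias = {0: 0.0, 1: 0.0}
for _ in range(B):
    boot = run_etc(lambda a: rng.choice(real[a]), horizon=200, explore_rounds=10)
    for a in (0, 1):
        boot_bias[a] += (np.mean(boot[a]) - naive[a]) / B

debiased = {a: naive[a] - boot_bias[a] for a in (0, 1)}
print("naive:", naive)
print("debiased:", debiased)

In the paper's terminology, the bias is estimated in the bootstrap world, where the data-generating distribution is known by construction, and then subtracted from the naive estimate; the theoretical analysis for ETC concerns how fast this bias decays in the real and bootstrap worlds.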

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-chen22g,
  title     = {Debiasing Samples from Online Learning Using Bootstrap},
  author    = {Chen, Ningyuan and Gao, Xuefeng and Xiong, Yi},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8514--8533},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/chen22g/chen22g.pdf},
  url       = {https://proceedings.mlr.press/v151/chen22g.html},
  abstract  = {It has been recently shown in the literature (Nie et al., 2018; Shin et al., 2019a,b) that the sample averages from online learning experiments are biased when used to estimate the mean reward. To correct the bias, off-policy evaluation methods, including importance sampling and doubly robust estimators, typically calculate the conditional propensity score, which is ill-defined for non-randomized policies such as UCB. This paper provides a procedure to debias the samples using bootstrap, which does not require knowledge of the reward distribution and can be applied to any adaptive policy. Numerical experiments demonstrate the effective bias reduction for samples generated by popular multi-armed bandit algorithms such as Explore-Then-Commit (ETC), UCB, Thompson sampling (TS) and $\epsilon$-greedy (EG). We analyze and provide theoretical justifications for the procedure under the ETC algorithm, including the asymptotic convergence of the bias decay rate in the real and bootstrap worlds.}
}
Endnote
%0 Conference Paper
%T Debiasing Samples from Online Learning Using Bootstrap
%A Ningyuan Chen
%A Xuefeng Gao
%A Yi Xiong
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-chen22g
%I PMLR
%P 8514--8533
%U https://proceedings.mlr.press/v151/chen22g.html
%V 151
%X It has been recently shown in the literature (Nie et al., 2018; Shin et al., 2019a,b) that the sample averages from online learning experiments are biased when used to estimate the mean reward. To correct the bias, off-policy evaluation methods, including importance sampling and doubly robust estimators, typically calculate the conditional propensity score, which is ill-defined for non-randomized policies such as UCB. This paper provides a procedure to debias the samples using bootstrap, which does not require knowledge of the reward distribution and can be applied to any adaptive policy. Numerical experiments demonstrate the effective bias reduction for samples generated by popular multi-armed bandit algorithms such as Explore-Then-Commit (ETC), UCB, Thompson sampling (TS) and $\epsilon$-greedy (EG). We analyze and provide theoretical justifications for the procedure under the ETC algorithm, including the asymptotic convergence of the bias decay rate in the real and bootstrap worlds.
APA
Chen, N., Gao, X. & Xiong, Y. (2022). Debiasing Samples from Online Learning Using Bootstrap. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8514-8533. Available from https://proceedings.mlr.press/v151/chen22g.html.
