Confidence-Budget Matching for Sequential Budgeted Learning

Yonathan Efroni, Nadav Merlis, Aadirupa Saha, Shie Mannor
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2937-2947, 2021.

Abstract

A core element in decision-making under uncertainty is feedback on the quality of the performed actions. However, in many applications, such feedback is restricted. For example, in recommendation systems, repeatedly asking the user to provide feedback on the quality of recommendations will annoy them. In this work, we formalize decision-making problems with a querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. Specifically, we focus on multi-armed bandits, linear contextual bandits, and reinforcement learning problems. We start by analyzing the performance of ‘greedy’ algorithms that query a reward whenever they can. We show that in fully stochastic settings, doing so performs surprisingly well, but in the presence of any adversity, this might lead to linear regret. To overcome this issue, we propose the Confidence-Budget Matching (CBM) principle, which queries rewards when the confidence intervals are wider than the inverse square root of the available budget. We analyze the performance of CBM-based algorithms in different settings and show that they perform well in the presence of adversity in the contexts, initial states, and budgets.
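
To make the query rule concrete, below is a minimal sketch of a CBM-style multi-armed bandit in Python. The UCB arm selection, the Hoeffding-style confidence radius, and all names (cbm_bandit, arms, delta) are illustrative assumptions rather than the paper's implementation; only the query condition, which compares the interval width to 1/sqrt(remaining budget), follows the abstract.

    import math
    import random

    def cbm_bandit(arms, horizon, budget, delta=0.05):
        # Sketch of the Confidence-Budget Matching (CBM) query rule for
        # stochastic multi-armed bandits. `arms` is a list of callables,
        # each returning a reward in [0, 1]. Names are illustrative,
        # not taken from the paper's code.
        k = len(arms)
        counts = [0] * k   # number of QUERIED rewards per arm
        sums = [0.0] * k   # sum of queried rewards per arm

        def ucb(a):
            # Optimistic index from queried samples only (Hoeffding-style
            # radius; the paper's exact confidence intervals may differ).
            if counts[a] == 0:
                return float("inf")
            mean = sums[a] / counts[a]
            return mean + math.sqrt(2.0 * math.log(1.0 / delta) / counts[a])

        for _ in range(horizon):
            a = max(range(k), key=ucb)   # play the optimistic arm
            reward = arms[a]()           # reward is realized, not yet observed

            # CBM principle: query the reward only while the arm's confidence
            # interval is wider than the inverse square root of the remaining
            # budget; otherwise the sample is left unobserved.
            width = (float("inf") if counts[a] == 0
                     else 2.0 * math.sqrt(2.0 * math.log(1.0 / delta) / counts[a]))
            if budget > 0 and width > 1.0 / math.sqrt(budget):
                counts[a] += 1
                sums[a] += reward
                budget -= 1

        return [sums[a] / counts[a] if counts[a] else None for a in range(k)]

    # Example: three Bernoulli arms, 10,000 rounds, budget of 200 queries.
    arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
    print(cbm_bandit(arms, horizon=10_000, budget=200))

Under this rule, early rounds (small counts, large budget) almost always query, while late rounds spend the remaining budget only on arms whose estimates are still loose relative to the budget left.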

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-efroni21a,
  title     = {Confidence-Budget Matching for Sequential Budgeted Learning},
  author    = {Efroni, Yonathan and Merlis, Nadav and Saha, Aadirupa and Mannor, Shie},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2937--2947},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/efroni21a/efroni21a.pdf},
  url       = {https://proceedings.mlr.press/v139/efroni21a.html}
}
Endnote
%0 Conference Paper
%T Confidence-Budget Matching for Sequential Budgeted Learning
%A Yonathan Efroni
%A Nadav Merlis
%A Aadirupa Saha
%A Shie Mannor
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-efroni21a
%I PMLR
%P 2937--2947
%U https://proceedings.mlr.press/v139/efroni21a.html
%V 139
APA
Efroni, Y., Merlis, N., Saha, A. & Mannor, S. (2021). Confidence-Budget Matching for Sequential Budgeted Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2937-2947. Available from https://proceedings.mlr.press/v139/efroni21a.html.