Approximately Stationary Bandits with Knapsacks

Giannis Fikioris, Éva Tardos
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:3758-3782, 2023.

Abstract

Bandits with Knapsacks (BwK), the generalization of the Multi-Armed Bandits problem under global budget constraints, has received a lot of attention in recent years. It has numerous applications, including dynamic pricing, repeated auctions, ad allocation, and network scheduling. Previous work has focused on one of two extremes: Stochastic BwK, where each round's rewards and resource consumptions are drawn i.i.d. from a fixed distribution, and Adversarial BwK, where these parameters are picked by an adversary. Achievable guarantees in the two cases exhibit a massive gap: no-regret learning is achievable in the stochastic case, but in the adversarial case only competitive-ratio-style guarantees are achievable, where the competitive ratio depends either on the budget or on both the time horizon and the number of resources. What makes this gap so vast is that in Adversarial BwK the guarantees get worse in the typical case when the budget is more binding. While “best-of-both-worlds” algorithms are known (single algorithms that provide the best achievable guarantee in each extreme case), their bounds degrade to the adversarial case as soon as the environment is not fully stochastic. Our work aims to bridge this gap, offering guarantees for a workload that is not exactly stochastic but is also not worst-case. We define a condition, Approximately Stationary BwK, that parameterizes how close to stochastic or adversarial an instance is. Based on these parameters, we explore the best competitive ratio attainable in BwK. We present two algorithms that are oblivious to the values of the parameters but guarantee competitive ratios that smoothly transition between the best possible guarantees in the two extreme cases, depending on the values of the parameters. Our guarantees offer a great improvement over the adversarial guarantee, especially when the available budget is small. We also prove bounds on the achievable guarantee, showing that our results are approximately tight when the budget is small.
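To make the setting concrete, the sketch below illustrates the BwK interaction protocol the abstract refers to: each round the learner pulls an arm, receives a reward, and consumes some amount of each of several resources, and play stops as soon as any budget runs out. This is only an illustration of the protocol, not an algorithm from the paper: the learner plays uniformly at random, the environment is a placeholder, and all names (bwk_round, run_bwk) are hypothetical.

import random

def bwk_round(arm, rng):
    """Placeholder environment: returns (reward, per-resource consumption).
    In Stochastic BwK these would be i.i.d. draws from a fixed distribution;
    in Adversarial BwK an adversary would pick them each round."""
    reward = rng.random()
    consumption = [rng.random(), rng.random()]  # two resources
    return reward, consumption

def run_bwk(num_arms=3, num_resources=2, budget=50.0, horizon=1000, seed=0):
    rng = random.Random(seed)
    remaining = [budget] * num_resources
    total_reward = 0.0
    for t in range(horizon):
        arm = rng.randrange(num_arms)  # uniform play, purely for illustration
        reward, consumption = bwk_round(arm, rng)
        # The process halts as soon as any resource budget is exhausted;
        # this hard stopping rule is why small budgets are so binding.
        if any(remaining[j] < consumption[j] for j in range(num_resources)):
            break
        for j in range(num_resources):
            remaining[j] -= consumption[j]
        total_reward += reward
    return total_reward

print(run_bwk())

Under this protocol, stochastic and adversarial instances differ only in how bwk_round generates its outputs, which is exactly the axis the paper's Approximately Stationary condition parameterizes.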

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-fikioris23a,
  title     = {Approximately Stationary Bandits with Knapsacks},
  author    = {Fikioris, Giannis and Tardos, {\'E}va},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {3758--3782},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/fikioris23a/fikioris23a.pdf},
  url       = {https://proceedings.mlr.press/v195/fikioris23a.html}
}
Endnote
%0 Conference Paper
%T Approximately Stationary Bandits with Knapsacks
%A Giannis Fikioris
%A Éva Tardos
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-fikioris23a
%I PMLR
%P 3758--3782
%U https://proceedings.mlr.press/v195/fikioris23a.html
%V 195
APA
Fikioris, G. & Tardos, É. (2023). Approximately Stationary Bandits with Knapsacks. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:3758-3782. Available from https://proceedings.mlr.press/v195/fikioris23a.html.
