Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics

Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3642-3643, 2024.

Abstract

We study a game between autobidding algorithms that compete in an online advertising platform. Each autobidder is tasked with maximizing its advertiser’s total value over multiple rounds of a repeated auction, subject to budget and return-on-investment (ROI) constraints. We propose a gradient-based learning algorithm that is guaranteed to satisfy all constraints and achieves vanishing individual regret. Our algorithm uses only bandit feedback and can be used with the first- or second-price auction, as well as with any “intermediate” auction format. Our main result is that when these autobidders play against each other, the resulting expected liquid welfare over all rounds is at least half of the expected optimal liquid welfare achieved by any allocation. Our analysis holds whether or not the bidding dynamics converge to an equilibrium, side-stepping both the dearth of provable convergence guarantees in the literature and the hardness result of Chen et al. (2021), which precludes such guarantees for budget-constrained second-price auctions. Our vanishing-regret result extends to an adversarial environment, without any assumptions on the other agents. We adopt a non-standard benchmark: the sequence of bids such that each bid $b_t$ maximizes value against the environment in round $t$. Hence, we side-step the impossibility results of Balseiro and Gur (2019) for the standard benchmark of the best fixed bid. When there is only a budget constraint, our algorithm specializes to the autobidding algorithm of Balseiro and Gur (2019), and our guarantees specialize to the regret and liquid welfare guarantees of Gaitonde et al. (2023). While our approach to bounding liquid welfare shares a common high-level strategy with Gaitonde et al. (2023), handling the ROI constraint, and particularly both constraints jointly, introduces a variety of new technical challenges. These challenges necessitate a new algorithm, changes to the way liquid welfare bounds are established, and a different methodology for establishing regret properties.
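
To make the algorithmic template concrete: the abstract describes gradient-based pacing, in which each autobidder maintains dual multipliers for its budget and ROI constraints and updates them with projected gradient steps using only bandit feedback (the value won and the price paid in each round). The sketch below simulates one such bidder against an exogenous second-price environment. It is a minimal illustration under stated assumptions, not the paper's exact algorithm: the bid formula, the step size eta, and the i.i.d. uniform values and competing bids are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10_000               # number of auction rounds
B = 1_000.0              # total budget
rho = B / T              # target per-round spend rate
gamma = 1.0              # ROI target: total value >= gamma * total spend
eta = 1.0 / np.sqrt(T)   # gradient step size (a typical, illustrative choice)

mu, lam = 0.0, 0.0       # dual multipliers: budget (mu) and ROI (lam)
spend = value = 0.0

for t in range(T):
    v = rng.uniform()    # advertiser's value for this round's impression
    p = rng.uniform()    # highest competing bid (the second-price payment)

    # Dual-adjusted bid: boosted by the ROI multiplier, discounted by the
    # budget multiplier. This parameterization is a plausible stand-in for
    # the paper's bid rule, not a transcription of it.
    b = v * (1.0 + lam) / (1.0 + lam + mu)
    b = min(b, B - spend)          # never bid more than the remaining budget

    win = b > p
    cost = p if win else 0.0
    gain = v if win else 0.0
    spend += cost
    value += gain

    # Projected gradient steps on the Lagrangian dual variables:
    #   budget: raise mu when spending faster than rho per round;
    #   ROI:    raise lam when gamma * cost exceeds the value won.
    mu = max(0.0, mu - eta * (rho - cost))
    lam = max(0.0, lam - eta * (gain - gamma * cost))

print(f"spend = {spend:.1f} (budget {B}), value = {value:.1f}, "
      f"ROI constraint satisfied: {value >= gamma * spend}")
```

With the ROI multiplier removed (lam held at 0), the bid reduces to v / (1 + mu) and the update recovers the budget-pacing scheme of Balseiro and Gur (2019), matching the specialization noted in the abstract. Liquid welfare, the efficiency measure above, caps each advertiser's contribution at the maximum amount it could feasibly pay under its constraints.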

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-lucier24a,
  title     = {Autobidders with Budget and {ROI} Constraints: Efficiency, Regret, and Pacing Dynamics},
  author    = {Lucier, Brendan and Pattathil, Sarath and Slivkins, Aleksandrs and Zhang, Mengxiao},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3642--3643},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/lucier24a/lucier24a.pdf},
  url       = {https://proceedings.mlr.press/v247/lucier24a.html}
}
APA
Lucier, B., Pattathil, S., Slivkins, A. & Zhang, M. (2024). Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3642-3643. Available from https://proceedings.mlr.press/v247/lucier24a.html.