Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality

Dhruv Malik, Conor Igoe, Yuanzhi Li, Aarti Singh
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23590-23609, 2023.

Abstract

In human-interactive applications of online learning, a human’s preferences or abilities are often a function of the algorithm’s recent actions. Motivated by this, a significant line of work has formalized settings where an action’s loss is a function of the number of times it was played in the prior $m$ timesteps, where $m$ corresponds to a bound on human memory capacity. To more faithfully capture the decay of human memory with time, we introduce the Weighted Tallying Bandit (WTB), which generalizes this setting by requiring that an action’s loss is a function of a weighted summation of the number of times it was played in the last $m$ timesteps. WTB is intractable without further assumptions, so we study it under Repeated Exposure Optimality (REO), a condition requiring the existence of an action that, when repetitively played, will eventually yield smaller loss than any other action sequence. We study the minimization of complete policy regret (CPR), the strongest notion of regret, in WTB under REO. Since $m$ is often unknown, we only assume access to an upper bound $M$ on $m$. We show that for problems with $K$ actions and horizon $T$, a simple modification of the successive elimination algorithm has $\mathcal{O} \left( \sqrt{KT} + (m+M)K \right)$ CPR. Up to an additive (in lieu of multiplicative) factor of $(m+M)K$, this recovers the classical guarantee for the far simpler stochastic multi-armed bandit with traditional regret. We additionally show that in our setting, any algorithm will suffer additive CPR of $\Omega \left( mK + M \right)$, demonstrating that our result is near optimal. Our method is computationally efficient, and we experimentally demonstrate its practicality and superiority over various baselines.
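The modified successive elimination the abstract refers to can be pictured with a minimal sketch. The idea, hedged here as an illustration rather than the paper's exact procedure, is that under REO an arm's loss settles once the arm has been played repeatedly, so each surviving arm is played in consecutive blocks: the first $M$ plays of a block serve as unrecorded burn-in (washing out memory effects from earlier actions), and only the remaining plays feed the elimination statistics. The block schedule, burn-in length, and confidence radius below are illustrative assumptions.

```python
import math

def successive_elimination_wtb(pull, K, T, M, delta=0.05):
    """Illustrative successive elimination with per-arm burn-in blocks.

    pull(a) returns the observed loss of playing arm a once.
    Each surviving arm is played M unrecorded burn-in times per block,
    then `block` recorded times; arms whose loss lower-confidence bound
    exceeds the best arm's upper-confidence bound are eliminated.
    """
    active = list(range(K))
    sums, counts = [0.0] * K, [0] * K
    mean = lambda a: sums[a] / max(counts[a], 1)
    t, block = 0, 1
    while t < T and len(active) > 1:
        for a in active:
            for _ in range(M):          # burn-in plays, not recorded
                if t >= T:
                    return min(active, key=mean)
                pull(a); t += 1
            for _ in range(block):      # recorded plays
                if t >= T:
                    return min(active, key=mean)
                sums[a] += pull(a); counts[a] += 1; t += 1

        def rad(a):
            # Anytime-style confidence radius (illustrative choice).
            n = counts[a]
            return math.sqrt(math.log(4 * K * n * n / delta) / (2 * n))

        best = min(active, key=mean)
        active = [a for a in active
                  if mean(a) - rad(a) <= mean(best) + rad(best)]
        block *= 2                      # geometrically growing blocks
    return min(active, key=mean)
```

In a stationary environment (where REO holds trivially) the sketch reduces to ordinary successive elimination with some wasted burn-in plays, which is consistent with the abstract's claim of paying only an additive overhead in $(m+M)K$.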

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-malik23a,
  title = {Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality},
  author = {Malik, Dhruv and Igoe, Conor and Li, Yuanzhi and Singh, Aarti},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages = {23590--23609},
  year = {2023},
  editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume = {202},
  series = {Proceedings of Machine Learning Research},
  month = {23--29 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v202/malik23a/malik23a.pdf},
  url = {https://proceedings.mlr.press/v202/malik23a.html},
  abstract = {In human-interactive applications of online learning, a human’s preferences or abilities are often a function of the algorithm’s recent actions. Motivated by this, a significant line of work has formalized settings where an action’s loss is a function of the number of times it was played in the prior $m$ timesteps, where $m$ corresponds to a bound on human memory capacity. To more faithfully capture the decay of human memory with time, we introduce the Weighted Tallying Bandit (WTB), which generalizes this setting by requiring that an action’s loss is a function of a weighted summation of the number of times it was played in the last $m$ timesteps. WTB is intractable without further assumptions, so we study it under Repeated Exposure Optimality (REO), a condition requiring the existence of an action that, when repetitively played, will eventually yield smaller loss than any other action sequence. We study the minimization of complete policy regret (CPR), the strongest notion of regret, in WTB under REO. Since $m$ is often unknown, we only assume access to an upper bound $M$ on $m$. We show that for problems with $K$ actions and horizon $T$, a simple modification of the successive elimination algorithm has $\mathcal{O} \left( \sqrt{KT} + (m+M)K \right)$ CPR. Up to an additive (in lieu of multiplicative) factor of $(m+M)K$, this recovers the classical guarantee for the far simpler stochastic multi-armed bandit with traditional regret. We additionally show that in our setting, any algorithm will suffer additive CPR of $\Omega \left( mK + M \right)$, demonstrating that our result is near optimal. Our method is computationally efficient, and we experimentally demonstrate its practicality and superiority over various baselines.}
}
Endnote
%0 Conference Paper
%T Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality
%A Dhruv Malik
%A Conor Igoe
%A Yuanzhi Li
%A Aarti Singh
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-malik23a
%I PMLR
%P 23590--23609
%U https://proceedings.mlr.press/v202/malik23a.html
%V 202
%X In human-interactive applications of online learning, a human’s preferences or abilities are often a function of the algorithm’s recent actions. Motivated by this, a significant line of work has formalized settings where an action’s loss is a function of the number of times it was played in the prior $m$ timesteps, where $m$ corresponds to a bound on human memory capacity. To more faithfully capture the decay of human memory with time, we introduce the Weighted Tallying Bandit (WTB), which generalizes this setting by requiring that an action’s loss is a function of a weighted summation of the number of times it was played in the last $m$ timesteps. WTB is intractable without further assumptions, so we study it under Repeated Exposure Optimality (REO), a condition requiring the existence of an action that, when repetitively played, will eventually yield smaller loss than any other action sequence. We study the minimization of complete policy regret (CPR), the strongest notion of regret, in WTB under REO. Since $m$ is often unknown, we only assume access to an upper bound $M$ on $m$. We show that for problems with $K$ actions and horizon $T$, a simple modification of the successive elimination algorithm has $\mathcal{O} \left( \sqrt{KT} + (m+M)K \right)$ CPR. Up to an additive (in lieu of multiplicative) factor of $(m+M)K$, this recovers the classical guarantee for the far simpler stochastic multi-armed bandit with traditional regret. We additionally show that in our setting, any algorithm will suffer additive CPR of $\Omega \left( mK + M \right)$, demonstrating that our result is near optimal. Our method is computationally efficient, and we experimentally demonstrate its practicality and superiority over various baselines.
APA
Malik, D., Igoe, C., Li, Y. & Singh, A. (2023). Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23590-23609. Available from https://proceedings.mlr.press/v202/malik23a.html.
