Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms

Meltem Tatlı, Arpan Mukherjee, Prashanth L. A., Karthikeyan Shanmugam, Ali Tajer
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3871-3879, 2025.

Abstract

This paper introduces a general framework for risk-sensitive bandits that integrates risk-sensitive objectives by adopting a rich class of {\em distortion riskmetrics}. The introduced framework subsumes various existing risk-sensitive models. An important and hitherto unknown observation is that, for a wide range of riskmetrics, the optimal bandit policy involves selecting a \emph{mixture} of arms. This is in sharp contrast to the convention in multi-armed bandit algorithms that there is generally a \emph{solitary} arm that maximizes the utility, whether purely reward-centric or risk-sensitive. This marks a major departure from the principles of bandit algorithm design, since there are uncountably many possible mixtures. The contributions of the paper are as follows: (i) it formalizes a general framework for risk-sensitive bandits, (ii) it identifies standard risk-sensitive bandit models for which solitary arm selection is not optimal, and (iii) it designs regret-efficient algorithms whose sampling strategies accurately track the optimal arm mixtures (when a mixture is optimal) or the optimal solitary arms (when a solitary arm is optimal). The algorithms are shown to achieve regret that scales as $O((\log T/T)^{\nu})$, where $T$ is the horizon and $\nu>0$ is a riskmetric-specific constant.
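The mixture-optimality phenomenon can be seen in a toy instance. The sketch below is illustrative and not from the paper: it assumes two deterministic arms (paying $0$ and $1$) and takes the objective to be the Gini deviation $\rho(X) = \tfrac{1}{2}\mathbb{E}|X - X'| = \int_0^1 F^{-1}(u)(2u-1)\,du$, a dispersion-type distortion riskmetric. Mixing the arms with weight $\alpha$ makes the per-round reward Bernoulli($\alpha$), for which $\rho = \alpha(1-\alpha)$, so every solitary arm scores $0$ while an interior mixture scores $1/4$.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): evaluate a dispersion-type
# distortion riskmetric, the Gini deviation rho(X) = E|X - X'| / 2, on a
# mixture of two deterministic arms. Arm 0 pays 0, arm 1 pays 1; sampling
# arm 1 with probability alpha makes the reward Bernoulli(alpha).

def gini_deviation_bernoulli(alpha):
    """Gini deviation E|X - X'|/2 of a Bernoulli(alpha) reward."""
    # Two i.i.d. draws differ with probability 2*alpha*(1-alpha),
    # and |X - X'| = 1 in that event, so rho = alpha*(1-alpha).
    return alpha * (1.0 - alpha)

alphas = np.linspace(0.0, 1.0, 101)
values = gini_deviation_bernoulli(alphas)
best = alphas[np.argmax(values)]

# Each solitary arm is deterministic, hence has Gini deviation 0,
# while the half-and-half mixture achieves 1/4.
print(f"solitary arms: rho = {values[0]:.3f}, {values[-1]:.3f}")
print(f"best mixture weight alpha* = {best:.2f}, rho = {values.max():.3f}")
```

Under such a dispersion-rewarding riskmetric, every solitary arm is strictly suboptimal and the oracle policy is the interior mixture $\alpha^* = 1/2$; for a monotone linear objective such as the mean, the usual solitary-arm optimality is recovered.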

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-tatli25a,
  title = {Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms},
  author = {Tatl{\i}, Meltem and Mukherjee, Arpan and A., Prashanth L. and Shanmugam, Karthikeyan and Tajer, Ali},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {3871--3879},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/tatli25a/tatli25a.pdf},
  url = {https://proceedings.mlr.press/v258/tatli25a.html},
  abstract = {This paper introduces a general framework for risk-sensitive bandits that integrates risk-sensitive objectives by adopting a rich class of {\em distortion riskmetrics}. The introduced framework subsumes various existing risk-sensitive models. An important and hitherto unknown observation is that, for a wide range of riskmetrics, the optimal bandit policy involves selecting a \emph{mixture} of arms. This is in sharp contrast to the convention in multi-armed bandit algorithms that there is generally a \emph{solitary} arm that maximizes the utility, whether purely reward-centric or risk-sensitive. This marks a major departure from the principles of bandit algorithm design, since there are uncountably many possible mixtures. The contributions of the paper are as follows: (i) it formalizes a general framework for risk-sensitive bandits, (ii) it identifies standard risk-sensitive bandit models for which solitary arm selection is not optimal, and (iii) it designs regret-efficient algorithms whose sampling strategies accurately track the optimal arm mixtures (when a mixture is optimal) or the optimal solitary arms (when a solitary arm is optimal). The algorithms are shown to achieve regret that scales as $O((\log T/T)^{\nu})$, where $T$ is the horizon and $\nu>0$ is a riskmetric-specific constant.}
}
Endnote
%0 Conference Paper
%T Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms
%A Meltem Tatlı
%A Arpan Mukherjee
%A Prashanth L. A.
%A Karthikeyan Shanmugam
%A Ali Tajer
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-tatli25a
%I PMLR
%P 3871--3879
%U https://proceedings.mlr.press/v258/tatli25a.html
%V 258
%X This paper introduces a general framework for risk-sensitive bandits that integrates risk-sensitive objectives by adopting a rich class of {\em distortion riskmetrics}. The introduced framework subsumes various existing risk-sensitive models. An important and hitherto unknown observation is that, for a wide range of riskmetrics, the optimal bandit policy involves selecting a \emph{mixture} of arms. This is in sharp contrast to the convention in multi-armed bandit algorithms that there is generally a \emph{solitary} arm that maximizes the utility, whether purely reward-centric or risk-sensitive. This marks a major departure from the principles of bandit algorithm design, since there are uncountably many possible mixtures. The contributions of the paper are as follows: (i) it formalizes a general framework for risk-sensitive bandits, (ii) it identifies standard risk-sensitive bandit models for which solitary arm selection is not optimal, and (iii) it designs regret-efficient algorithms whose sampling strategies accurately track the optimal arm mixtures (when a mixture is optimal) or the optimal solitary arms (when a solitary arm is optimal). The algorithms are shown to achieve regret that scales as $O((\log T/T)^{\nu})$, where $T$ is the horizon and $\nu>0$ is a riskmetric-specific constant.
APA
Tatlı, M., Mukherjee, A., A., P.L., Shanmugam, K. & Tajer, A. (2025). Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3871-3879. Available from https://proceedings.mlr.press/v258/tatli25a.html.