Exploration vs Exploitation vs Safety: Risk-Aware Multi-Armed Bandits


Nicolas Galichet, Michèle Sebag, Olivier Teytaud;
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:245-260, 2013.

Abstract

Motivated by applications in energy management, this paper presents the Multi-Armed Risk-Aware Bandit (MaRaB) algorithm. With the goal of limiting the exploration of risky arms, MaRaB takes as arm quality its conditional value at risk. When the user-supplied risk level goes to 0, the arm quality tends toward the essential infimum of the arm distribution, and MaRaB tends toward the MIN multi-armed bandit algorithm, which targets the arm with maximal minimal value. As a first contribution, this paper presents a theoretical analysis of the MIN algorithm under mild assumptions, establishing its robustness compared to UCB. The analysis is supported by extensive experimental validation of MIN and MaRaB against UCB and state-of-the-art risk-aware MAB algorithms on artificial and real-world problems.
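To make the idea concrete, the sketch below illustrates CVaR-based arm selection in the spirit of the abstract: each arm is scored by the empirical conditional value at risk (the mean of its worst α-fraction of observed rewards), and as α → 0 this score degenerates to the minimum observed reward, recovering MIN-style selection. This is a hedged illustration of the general principle, not the paper's exact MaRaB rule (which the abstract does not specify in full); the function names and the greedy selection scheme are assumptions for exposition.

```python
import math
import random

def empirical_cvar(rewards, alpha):
    """Empirical CVaR at level alpha: mean of the worst ceil(alpha * n) rewards.
    As alpha -> 0 this tends to min(rewards), i.e. the MIN criterion."""
    k = max(1, math.ceil(alpha * len(rewards)))
    worst = sorted(rewards)[:k]
    return sum(worst) / k

def cvar_greedy(arms, horizon, alpha=0.1, seed=0):
    """Illustrative risk-aware bandit loop (assumed greedy scheme, not MaRaB itself).
    `arms` is a list of callables taking an RNG and returning a reward."""
    rng = random.Random(seed)
    history = [[] for _ in arms]
    for t in range(horizon):
        if t < len(arms):
            i = t  # pull each arm once to initialize
        else:
            # select the arm with the best (highest) empirical CVaR
            i = max(range(len(arms)),
                    key=lambda j: empirical_cvar(history[j], alpha))
        history[i].append(arms[i](rng))
    return history
```

With a low-but-safe arm and a higher-mean-but-risky arm, this selector concentrates pulls on the arm whose worst-case tail is better, which is the risk-averse behavior the abstract describes.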
