Multi-armed Bandits with Missing Outcomes

Ilia Mahrooghi, Mahshad Moradi, Sina Akbari, Negar Kiyavash
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2844-2875, 2025.

Abstract

While significant progress has been made in designing algorithms that minimize regret in online decision-making, real-world scenarios often introduce additional complexities, with missing outcomes perhaps among the most challenging ones. Overlooking this aspect or simply assuming random missingness invariably leads to biased estimates of the rewards and may result in linear regret. Despite the practical relevance of this challenge, no rigorous methodology currently exists for systematically handling missingness, especially when the missingness mechanism is not random. In this paper, we address this gap in the context of multi-armed bandits (MAB) with missing outcomes by analyzing the impact of different missingness mechanisms on achievable regret bounds. We introduce algorithms that account for missingness under both missing at random (MAR) and missing not at random (MNAR) models. Through both analytical and simulation studies, we demonstrate the drastic improvements in decision-making by accounting for missingness in these settings.

Cite this Paper

BibTeX
@InProceedings{pmlr-v286-mahrooghi25a,
  title     = {Multi-armed Bandits with Missing Outcomes},
  author    = {Mahrooghi, Ilia and Moradi, Mahshad and Akbari, Sina and Kiyavash, Negar},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2844--2875},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/mahrooghi25a/mahrooghi25a.pdf},
  url       = {https://proceedings.mlr.press/v286/mahrooghi25a.html}
}
Endnote
%0 Conference Paper
%T Multi-armed Bandits with Missing Outcomes
%A Ilia Mahrooghi
%A Mahshad Moradi
%A Sina Akbari
%A Negar Kiyavash
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-mahrooghi25a
%I PMLR
%P 2844--2875
%U https://proceedings.mlr.press/v286/mahrooghi25a.html
%V 286
APA
Mahrooghi, I., Moradi, M., Akbari, S., & Kiyavash, N. (2025). Multi-armed Bandits with Missing Outcomes. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2844-2875. Available from https://proceedings.mlr.press/v286/mahrooghi25a.html.