Pareto Regret Analyses in Multi-objective Multi-armed Bandit

Mengfan Xu, Diego Klabjan
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:38499-38517, 2023.

Abstract

We study Pareto optimality in the multi-objective multi-armed bandit problem by formulating the adversarial multi-objective multi-armed bandit and defining Pareto regrets that apply to both stochastic and adversarial settings. Unlike scalarized regrets, these regrets do not rely on any scalarization function and directly reflect Pareto optimality. We also present new algorithms for the cases with and without prior information about the multi-objective multi-armed bandit setting. Our upper and lower bounds on Pareto regrets show that the algorithms are simultaneously optimal in adversarial settings and nearly optimal, up to a logarithmic factor, in stochastic settings. Moreover, the lower bound analyses show that the new regrets are consistent with the existing Pareto regret for stochastic settings, and extend an adversarial attack mechanism from single-objective bandits to the multi-objective setting.
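As context for the regrets studied above, here is a minimal sketch of the Pareto regret in the stochastic setting, following the standard definition (Drugan and Nowé, 2013); the symbols \mu_i, \Delta_i, and a_t below are our notation, not necessarily the paper's. Arm j Pareto-dominates arm i if \mu_j \geq \mu_i componentwise with strict inequality in at least one objective. The Pareto suboptimality gap of arm i is

    \Delta_i = \inf\{\varepsilon \geq 0 : \mu_i + \varepsilon \mathbf{1} \text{ is not Pareto-dominated by any arm}\},

and the Pareto regret over horizon T is

    \mathrm{PR}(T) = \sum_{t=1}^{T} \Delta_{a_t},

where a_t denotes the arm pulled at round t. No scalarization function appears in this definition: the regret is zero exactly when the player only pulls Pareto-optimal arms.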

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-xu23i,
  title     = {Pareto Regret Analyses in Multi-objective Multi-armed Bandit},
  author    = {Xu, Mengfan and Klabjan, Diego},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {38499--38517},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/xu23i/xu23i.pdf},
  url       = {https://proceedings.mlr.press/v202/xu23i.html}
}
APA
Xu, M., & Klabjan, D. (2023). Pareto Regret Analyses in Multi-objective Multi-armed Bandit. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:38499-38517. Available from https://proceedings.mlr.press/v202/xu23i.html.