Multi-objective Contextual Bandit Problem with Similarity Information

Eralp Turgay, Doruk Oner, Cem Tekin
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1673-1681, 2018.

Abstract

In this paper we propose the multi-objective contextual bandit problem with similarity information. This problem extends the classical contextual bandit problem with similarity information by introducing multiple and possibly conflicting objectives. Since the best arm in each objective can be different given the context, learning the best arm based on a single objective can jeopardize the rewards obtained from the other objectives. To handle this issue, we define a new performance metric, called the contextual Pareto regret, to evaluate the performance of the learner. Essentially, the contextual Pareto regret is the sum of the distances of the arms chosen by the learner to the context-dependent Pareto front. For this problem, we develop a new online learning algorithm called Pareto Contextual Zooming (PCZ), which exploits the idea of contextual zooming to learn the arms that are close to the Pareto front for each observed context by adaptively partitioning the joint context-arm set according to the observed rewards and locations of the context-arm pairs selected in the past. Then, we prove that PCZ achieves $\tilde O (T^{(1+d_p)/(2+d_p)})$ Pareto regret, where $d_p$ is the Pareto zooming dimension that depends on the size of the set of near-optimal context-arm pairs. Moreover, we show that this regret bound is nearly optimal by providing an almost matching $\Omega(T^{(1+d_p)/(2+d_p)})$ lower bound.
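For intuition about the performance metric, the contextual Pareto regret sums, over rounds, each chosen arm's distance to the Pareto front for the observed context. The sketch below is an illustrative Python example, not the paper's PCZ implementation: the function names and the toy reward matrix are assumptions, and the distance used is the standard Pareto suboptimality gap (the smallest uniform boost across objectives that would make an arm non-dominated).

```python
import numpy as np

def pareto_front(rewards):
    """Indices of non-dominated arms.

    rewards: (n_arms, n_objectives) array of expected rewards for one context.
    Arm i is dominated if some arm j is at least as good in every objective
    and strictly better in at least one.
    """
    n = len(rewards)
    front = []
    for i in range(n):
        dominated = any(
            np.all(rewards[j] >= rewards[i]) and np.any(rewards[j] > rewards[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def pareto_gap(rewards, i):
    """Pareto suboptimality gap of arm i: the smallest eps such that adding
    eps to every objective of arm i leaves it non-dominated (0 for arms
    already on the front). Summing this gap over rounds gives a Pareto regret."""
    gaps = [np.min(rewards[j] - rewards[i]) for j in range(len(rewards)) if j != i]
    return max(0.0, max(gaps))

# Toy context with two objectives and three arms.
rewards = np.array([
    [0.9, 0.2],  # optimal in objective 1
    [0.5, 0.8],  # optimal in objective 2
    [0.4, 0.4],  # dominated by arm 1
])
print(pareto_front(rewards))      # arms 0 and 1 form the Pareto front
print(pareto_gap(rewards, 2))     # positive gap: arm 2 is off the front
```

Note that arms 0 and 1 are incomparable: each is best in a different objective, which is exactly why single-objective regret is inadequate here and the regret is measured against the whole front.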

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-turgay18a,
  title     = {Multi-objective Contextual Bandit Problem with Similarity Information},
  author    = {Turgay, Eralp and Oner, Doruk and Tekin, Cem},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1673--1681},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/turgay18a/turgay18a.pdf},
  url       = {https://proceedings.mlr.press/v84/turgay18a.html},
  abstract  = {In this paper we propose the multi-objective contextual bandit problem with similarity information. This problem extends the classical contextual bandit problem with similarity information by introducing multiple and possibly conflicting objectives. Since the best arm in each objective can be different given the context, learning the best arm based on a single objective can jeopardize the rewards obtained from the other objectives. To handle this issue, we define a new performance metric, called the contextual Pareto regret, to evaluate the performance of the learner. Essentially, the contextual Pareto regret is the sum of the distances of the arms chosen by the learner to the context-dependent Pareto front. For this problem, we develop a new online learning algorithm called Pareto Contextual Zooming (PCZ), which exploits the idea of contextual zooming to learn the arms that are close to the Pareto front for each observed context by adaptively partitioning the joint context-arm set according to the observed rewards and locations of the context-arm pairs selected in the past. Then, we prove that PCZ achieves $\tilde O (T^{(1+d_p)/(2+d_p)})$ Pareto regret, where $d_p$ is the Pareto zooming dimension that depends on the size of the set of near-optimal context-arm pairs. Moreover, we show that this regret bound is nearly optimal by providing an almost matching $\Omega(T^{(1+d_p)/(2+d_p)})$ lower bound.}
}
Endnote
%0 Conference Paper
%T Multi-objective Contextual Bandit Problem with Similarity Information
%A Eralp Turgay
%A Doruk Oner
%A Cem Tekin
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-turgay18a
%I PMLR
%P 1673--1681
%U https://proceedings.mlr.press/v84/turgay18a.html
%V 84
%X In this paper we propose the multi-objective contextual bandit problem with similarity information. This problem extends the classical contextual bandit problem with similarity information by introducing multiple and possibly conflicting objectives. Since the best arm in each objective can be different given the context, learning the best arm based on a single objective can jeopardize the rewards obtained from the other objectives. To handle this issue, we define a new performance metric, called the contextual Pareto regret, to evaluate the performance of the learner. Essentially, the contextual Pareto regret is the sum of the distances of the arms chosen by the learner to the context-dependent Pareto front. For this problem, we develop a new online learning algorithm called Pareto Contextual Zooming (PCZ), which exploits the idea of contextual zooming to learn the arms that are close to the Pareto front for each observed context by adaptively partitioning the joint context-arm set according to the observed rewards and locations of the context-arm pairs selected in the past. Then, we prove that PCZ achieves $\tilde O (T^{(1+d_p)/(2+d_p)})$ Pareto regret, where $d_p$ is the Pareto zooming dimension that depends on the size of the set of near-optimal context-arm pairs. Moreover, we show that this regret bound is nearly optimal by providing an almost matching $\Omega(T^{(1+d_p)/(2+d_p)})$ lower bound.
APA
Turgay, E., Oner, D. & Tekin, C. (2018). Multi-objective Contextual Bandit Problem with Similarity Information. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1673-1681. Available from https://proceedings.mlr.press/v84/turgay18a.html.
