k-experts - Online Policies and Fundamental Limits

Samrat Mukhopadhyay, Sourav Sahoo, Abhishek Sinha
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:342-365, 2022.

Abstract

We introduce the k-experts problem - a generalization of the classic Prediction with Experts' Advice framework. Unlike the classic version, where the learner selects exactly one expert from a pool of N experts at each round, in this problem the learner selects a subset of k experts at each round (1 ≤ k ≤ N). The reward obtained by the learner at each round is assumed to be a function of the k selected experts. The primary objective is to design an online learning policy with small regret. In this pursuit, we propose SAGE (Sampled Hedge) - a framework for designing efficient online learning policies by leveraging statistical sampling techniques. For a wide class of reward functions, we show that SAGE either achieves the first sublinear regret guarantee or improves upon the existing ones. Furthermore, going beyond the notion of regret, we fully characterize the mistake bounds achievable by online learning policies for stable loss functions. We conclude the paper by establishing a tight regret lower bound for a variant of the k-experts problem and carrying out experiments with standard datasets.
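To make the sampled-Hedge idea concrete, the Python sketch below maintains standard Hedge weights over the N experts and, at each round, converts them into capped inclusion probabilities summing to k, then draws an exactly-k subset via Madow's systematic sampling so that each expert is played with its target marginal probability. The capped-rescaling routine and the additive reward model are illustrative assumptions for this sketch (with 1 ≤ k < N), not the paper's exact construction.

    import numpy as np

    def capped_marginals(w, k):
        # Turn positive weights w into inclusion probabilities p with
        # sum(p) == k and p[i] <= 1, proportional to w where uncapped.
        # Illustrative water-filling step; assumes 1 <= k < len(w).
        p = np.zeros(len(w))
        free = np.ones(len(w), dtype=bool)
        budget = float(k)
        while True:
            q = budget * w[free] / w[free].sum()
            if q.max() <= 1.0:
                p[free] = q
                return p
            hit = np.where(free)[0][q >= 1.0]  # cap over-full experts at 1
            p[hit] = 1.0
            free[hit] = False
            budget -= len(hit)                 # redistribute what remains

    def madow_sample(p, rng):
        # Madow's systematic sampling: draws exactly k = sum(p) distinct
        # items, including item i with probability p[i].
        k = int(round(p.sum()))
        cum = np.concatenate(([0.0], np.cumsum(p)))
        u = rng.uniform()
        return [int(np.searchsorted(cum, u + j, side="right")) - 1
                for j in range(k)]

    def sampled_hedge(rewards, k, eta, seed=0):
        # rewards: T x N array of per-expert rewards in [0, 1]; the round
        # reward is assumed additive over the chosen subset (illustration).
        rng = np.random.default_rng(seed)
        T, N = rewards.shape
        w = np.ones(N)
        total = 0.0
        for t in range(T):
            p = capped_marginals(w, k)
            S = madow_sample(p, rng)           # the k experts played at round t
            total += rewards[t, S].sum()
            w *= np.exp(eta * rewards[t])      # multiplicative Hedge update
        return total

Because each interval in the cumulative sum has length p[i] ≤ 1, the k shifted points u, u+1, ..., u+k-1 fall in k distinct intervals, so the draw always yields exactly k distinct experts while matching the marginals p exactly.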

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-mukhopadhyay22a,
  title     = {k-experts - Online Policies and Fundamental Limits},
  author    = {Mukhopadhyay, Samrat and Sahoo, Sourav and Sinha, Abhishek},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {342--365},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/mukhopadhyay22a/mukhopadhyay22a.pdf},
  url       = {https://proceedings.mlr.press/v151/mukhopadhyay22a.html}
}
Endnote
%0 Conference Paper
%T k-experts - Online Policies and Fundamental Limits
%A Samrat Mukhopadhyay
%A Sourav Sahoo
%A Abhishek Sinha
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-mukhopadhyay22a
%I PMLR
%P 342--365
%U https://proceedings.mlr.press/v151/mukhopadhyay22a.html
%V 151
APA
Mukhopadhyay, S., Sahoo, S. & Sinha, A. (2022). k-experts - Online Policies and Fundamental Limits. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:342-365. Available from https://proceedings.mlr.press/v151/mukhopadhyay22a.html.