Online Learning with Stochastically Partitioning Experts

Puranjay Datta, Sharayu Moharir, Jaya Prakash Champati
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:882-896, 2025.

Abstract

We study a variant of the experts problem in which new experts are revealed over time according to a stochastic process. The experts are represented by partitions of a hypercube $\mathbb{B}$ in $d$-dimensional Euclidean space. In each round, a point is drawn from $\mathbb{B}$ in an independent and identically distributed manner from an unknown distribution. Through each chosen point we draw $d$ mutually orthogonal hyperplanes, each parallel to a face of $\mathbb{B}$. The set of experts available in a round is the set of partitions of $\mathbb{B}$ created by all the hyperplanes drawn up to that round. Losses are adversarial, and the performance metrics of interest are the expected regret and high-probability bounds on the sample-path regret. We propose Hedge-G, a suitably adapted version of the Hedge algorithm that uses a constant learning rate and achieves $O(\sqrt{2^d T \log T})$ expected regret, which is order-optimal. Further, we show that for Hedge-G there is a trade-off between choosing a learning rate that yields optimal expected regret and one that yields a high-probability sample-path regret bound. We address this limitation by proposing AdaHedge-G, a variant of Hedge-G that adapts its learning rate by tracking the loss of the experts revealed up to each round. AdaHedge-G simultaneously achieves $O(\log(\log T)\sqrt{T \log T})$ expected regret and $O(\log T \sqrt{T \log T})$ sample-path regret with probability at least $1-T^{-c}$, where $c > 0$ is a constant that depends on $d$.
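To make the setting concrete, the sketch below simulates a Hedge-style exponential-weights update over an expert pool that grows over time. It is a generic illustration, not the paper's Hedge-G or AdaHedge-G: the function names, the choice of learning rate, the loss model, and the convention of seeding each newly revealed expert with the current minimum cumulative loss are all assumptions made here for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def hedge_growing_pool(T, eta, losses_fn, arrivals_fn):
        # Hedge-style exponential weighting over an expert pool that grows
        # over time. Generic sketch; not the paper's Hedge-G.
        cum_loss = np.zeros(0)   # cumulative loss of each expert so far
        alg_loss = 0.0           # cumulative expected loss of the algorithm
        for t in range(1, T + 1):
            k = arrivals_fn(t)   # number of experts newly revealed in round t
            if k > 0:
                # Assumption: seed new experts with the current minimum
                # cumulative loss (one common convention for growing pools).
                seed = cum_loss.min() if cum_loss.size else 0.0
                cum_loss = np.concatenate([cum_loss, np.full(k, seed)])
            # Exponential weights; subtracting the minimum keeps exp() stable.
            w = np.exp(-eta * (cum_loss - cum_loss.min()))
            p = w / w.sum()
            ell = losses_fn(t, cum_loss.size)  # adversarial losses in [0, 1]
            alg_loss += p @ ell                # expected loss of the random play
            cum_loss += ell
        # Regret against the best expert in hindsight (seeded totals).
        return alg_loss - cum_loss.min()

    # Toy run: one new expert per round, uniform random losses, and the
    # generic eta = sqrt(log T / T); the paper tunes its rates differently.
    T = 1000
    regret = hedge_growing_pool(
        T=T,
        eta=np.sqrt(np.log(T) / T),
        losses_fn=lambda t, n: rng.random(n),
        arrivals_fn=lambda t: 1,
    )
    print(f"regret ~ {regret:.2f}")

An adaptive variant in the spirit of AdaHedge-G would replace the fixed eta with one updated from the losses observed so far, which is what lets AdaHedge-G obtain both the expected and sample-path guarantees stated above.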

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-datta25a,
  title     = {Online Learning with Stochastically Partitioning Experts},
  author    = {Datta, Puranjay and Moharir, Sharayu and Champati, Jaya Prakash},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {882--896},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/datta25a/datta25a.pdf},
  url       = {https://proceedings.mlr.press/v286/datta25a.html}
}
Endnote
%0 Conference Paper
%T Online Learning with Stochastically Partitioning Experts
%A Puranjay Datta
%A Sharayu Moharir
%A Jaya Prakash Champati
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-datta25a
%I PMLR
%P 882--896
%U https://proceedings.mlr.press/v286/datta25a.html
%V 286
APA
Datta, P., Moharir, S., & Champati, J. P. (2025). Online Learning with Stochastically Partitioning Experts. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:882-896. Available from https://proceedings.mlr.press/v286/datta25a.html.