LITE: Efficiently Estimating Gaussian Probability of Maximality

Nicolas Menet, Jonas Hübotter, Parnian Kassraie, Andreas Krause
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4996-5004, 2025.

Abstract

We consider the problem of computing the \emph{probability of maximality} (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with \emph{almost-linear time and memory} complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.
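The quantity in question can be illustrated with a naive Monte Carlo baseline: for a Gaussian vector, estimate each dimension's probability of being the maximum by sampling. This sketch is not the LITE estimator from the paper (which is not described on this page); the mean and covariance values are illustrative, and the sampling approach shown is exactly the kind of costly baseline LITE improves on.

```python
import numpy as np

# Naive Monte Carlo estimate of Gaussian probability of maximality (PoM):
# for each dimension i, estimate P(X_i = max_j X_j) by drawing joint samples.
# NOTE: this is a baseline illustrating the quantity, not the LITE method;
# it stores the full d x d covariance and n x d samples, whereas LITE runs
# in almost-linear time and memory in d.
rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5, 1.0])            # mean vector (illustrative)
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 1.0, 0.3],
                [0.0, 0.3, 1.0]])         # covariance matrix (illustrative)
samples = rng.multivariate_normal(mu, cov, size=200_000)  # shape (n, d)
winners = samples.argmax(axis=1)          # index of the maximal dimension per draw
pom = np.bincount(winners, minlength=mu.size) / len(samples)
print(pom)                                # entries sum to 1; dimension 2 dominates
```

As expected, the dimension with the highest mean receives the largest share of the probability mass, and the estimates form a distribution over dimensions (they sum to one), which is what makes PoM useful for fine-grained analysis of an action domain.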

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-menet25a,
  title     = {LITE: Efficiently Estimating Gaussian Probability of Maximality},
  author    = {Menet, Nicolas and H{\"u}botter, Jonas and Kassraie, Parnian and Krause, Andreas},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4996--5004},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/menet25a/menet25a.pdf},
  url       = {https://proceedings.mlr.press/v258/menet25a.html},
  abstract  = {We consider the problem of computing the \emph{probability of maximality} (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with \emph{almost-linear time and memory} complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.}
}
Endnote
%0 Conference Paper
%T LITE: Efficiently Estimating Gaussian Probability of Maximality
%A Nicolas Menet
%A Jonas Hübotter
%A Parnian Kassraie
%A Andreas Krause
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-menet25a
%I PMLR
%P 4996--5004
%U https://proceedings.mlr.press/v258/menet25a.html
%V 258
%X We consider the problem of computing the \emph{probability of maximality} (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with \emph{almost-linear time and memory} complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.
APA
Menet, N., Hübotter, J., Kassraie, P. &amp; Krause, A. (2025). LITE: Efficiently Estimating Gaussian Probability of Maximality. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4996-5004. Available from https://proceedings.mlr.press/v258/menet25a.html.