A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence

Alex R. Dytso, Mario Goldenbaum, H. Vincent Poor, Shlomo Shamai
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8080-8094, 2022.

Abstract

A common way of characterizing minimax estimators in point estimation is by moving the problem into the Bayesian estimation domain and finding a least favorable prior distribution. The Bayesian estimator induced by a least favorable prior, under mild conditions, is then known to be minimax. However, finding least favorable distributions can be challenging due to the inherent optimization over the space of probability distributions, which is infinite-dimensional. This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension. The benefit of this dimensionality reduction is that it permits the use of popular algorithms such as projected gradient ascent to find least favorable priors. Throughout the paper, in order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
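For context, a Bregman divergence generated by a strictly convex, differentiable function φ is D_φ(x, x̂) = φ(x) − φ(x̂) − ⟨∇φ(x̂), x − x̂⟩; the choice φ(x) = ‖x‖² recovers the squared-error loss, for which the Bayes estimator is the conditional mean and the Bayes risk is the minimum mean square error (MMSE). The sketch below is an illustration of the kind of finite-dimensional search the abstract refers to, not the paper's algorithm verbatim: assuming a scalar Gaussian-noise model Y = X + N with X confined to [-A, A] and squared-error loss, it fixes a grid of candidate mass points and runs projected gradient ascent on the mixture weights to approximately maximize the Bayes risk. The grid, step size, and iteration count are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): projected gradient ascent over the
# weights of a finitely supported prior for Y = X + N, N ~ N(0, 1), X in [-A, A],
# under squared-error loss (a Bregman divergence).
import numpy as np

A = 2.0                                         # amplitude constraint on X (assumed)
support = np.linspace(-A, A, 41)                # candidate mass-point locations (assumed grid)
y_grid = np.linspace(-A - 6.0, A + 6.0, 2001)   # quadrature grid for Y
dy = y_grid[1] - y_grid[0]

# Gaussian likelihoods phi(y - x_i), precomputed as a (len(y_grid), len(support)) matrix
lik = np.exp(-0.5 * (y_grid[:, None] - support[None, :]) ** 2) / np.sqrt(2.0 * np.pi)

def bayes_risk(p):
    """Bayes risk (MMSE) of the prior with weights p: E[X^2] - E[(E[X|Y])^2]."""
    marginal = lik @ p                 # p_Y(y) = sum_i p_i phi(y - x_i)
    num = lik @ (p * support)          # sum_i p_i x_i phi(y - x_i)
    cond_mean_sq = num ** 2 / np.maximum(marginal, 1e-300)
    return float(p @ support ** 2 - np.sum(cond_mean_sq) * dy)

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def grad_risk(p, eps=1e-6):
    """Finite-difference gradient of the Bayes risk with respect to the weights."""
    g = np.zeros_like(p)
    base = bayes_risk(p)
    for i in range(len(p)):
        q = p.copy()
        q[i] += eps
        g[i] = (bayes_risk(q) - base) / eps
    return g

# Projected gradient ascent: maximize the (concave) Bayes risk over the simplex.
p = np.full(len(support), 1.0 / len(support))   # start from the uniform prior
step = 0.5                                      # illustrative step size
for _ in range(300):
    p = project_simplex(p + step * grad_risk(p))

print("approximate least favorable Bayes risk:", bayes_risk(p))
print("mass points with weight > 1e-3:", support[p > 1e-3])
```

In this toy setup the Bayes risk is concave in the prior, so projected gradient ascent converges to an approximate least favorable prior supported on the chosen grid; the paper's contribution is the bound on how many mass points are needed, which justifies searching over such finite parameterizations.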

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-dytso22a,
  title     = {A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence},
  author    = {Dytso, Alex R. and Goldenbaum, Mario and Poor, H. Vincent and Shamai, Shlomo},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8080--8094},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/dytso22a/dytso22a.pdf},
  url       = {https://proceedings.mlr.press/v151/dytso22a.html},
  abstract  = {A common way of characterizing minimax estimators in point estimation is by moving the problem into the Bayesian estimation domain and finding a least favorable prior distribution. The Bayesian estimator induced by a least favorable prior, under mild conditions, is then known to be minimax. However, finding least favorable distributions can be challenging due to the inherent optimization over the space of probability distributions, which is infinite-dimensional. This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension. The benefit of this dimensionality reduction is that it permits the use of popular algorithms such as projected gradient ascent to find least favorable priors. Throughout the paper, in order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.}
}
Endnote
%0 Conference Paper
%T A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence
%A Alex R. Dytso
%A Mario Goldenbaum
%A H. Vincent Poor
%A Shlomo Shamai
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-dytso22a
%I PMLR
%P 8080--8094
%U https://proceedings.mlr.press/v151/dytso22a.html
%V 151
%X A common way of characterizing minimax estimators in point estimation is by moving the problem into the Bayesian estimation domain and finding a least favorable prior distribution. The Bayesian estimator induced by a least favorable prior, under mild conditions, is then known to be minimax. However, finding least favorable distributions can be challenging due to the inherent optimization over the space of probability distributions, which is infinite-dimensional. This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension. The benefit of this dimensionality reduction is that it permits the use of popular algorithms such as projected gradient ascent to find least favorable priors. Throughout the paper, in order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
APA
Dytso, A.R., Goldenbaum, M., Poor, H.V. & Shamai, S. (2022). A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8080-8094. Available from https://proceedings.mlr.press/v151/dytso22a.html.