The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling

Christopher Tosh, Sanjoy Dasgupta
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2993-3035, 2019.

Abstract

We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-tosh19a,
  title     = {The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling},
  author    = {Tosh, Christopher and Dasgupta, Sanjoy},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {2993--3035},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/tosh19a/tosh19a.pdf},
  url       = {https://proceedings.mlr.press/v99/tosh19a.html},
  abstract  = {We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.}
}
Endnote
%0 Conference Paper
%T The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
%A Christopher Tosh
%A Sanjoy Dasgupta
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-tosh19a
%I PMLR
%P 2993--3035
%U https://proceedings.mlr.press/v99/tosh19a.html
%V 99
%X We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.
APA
Tosh, C. & Dasgupta, S. (2019). The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:2993-3035. Available from https://proceedings.mlr.press/v99/tosh19a.html.