The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2993-3035, 2019.
Abstract
We prove that, for a broad range of problems, maximum a posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.