The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2993-3035, 2019.
Abstract
We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.