The Mean-Field Approximation: Information Inequalities, Algorithms, and Complexity
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1326-1347, 2018.
Abstract
The mean field approximation to the Ising model is a canonical variational tool that is used for analysis and inference in Ising models. We provide a simple and optimal bound for the KL error of the mean field approximation for Ising models on general graphs, and extend it to higher order Markov random fields. Our bound improves on previous bounds obtained in work in the graph limit literature by Borgs, Chayes, Lovász, Sós, and Vesztergombi and recent works by Basak and Mukherjee, and Eldan. Our bound is tight up to lower order terms. Building on the methods used to prove the bound, along with techniques from combinatorics and optimization, we study the algorithmic problem of estimating the (variational) free energy for Ising models and general Markov random fields. For a graph $G$ on $n$ vertices and interaction matrix $J$ with Frobenius norm $\|J\|_F$, we provide algorithms that approximate the free energy within an additive error of $\epsilon n \|J\|_F$ in time $\exp(\mathrm{poly}(1/\epsilon))$. We also show that approximation within $(n \|J\|_F)^{1-\delta}$ is NP-hard for every $\delta > 0$. Finally, we provide more efficient approximation algorithms, which find the optimal mean field approximation, for ferromagnetic Ising models and for Ising models satisfying Dobrushin's condition.
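To make the object of study concrete, the following is a minimal sketch (not the paper's algorithm) of the naive mean field approximation for an Ising model $P(x) \propto \exp(\tfrac12 x^\top J x)$ over $x \in \{-1,+1\}^n$: a damped fixed-point iteration $m_i \leftarrow \tanh(\sum_j J_{ij} m_j)$ over product-measure magnetizations, followed by the Gibbs variational lower bound on the free energy $\log Z$. The function name, damping scheme, and iteration count are illustrative choices, not from the paper.

```python
import numpy as np

def mean_field_ising(J, iters=500, damping=0.5, seed=0):
    """Naive mean field for an Ising model P(x) ∝ exp(0.5 * x^T J x).

    J must be symmetric with zero diagonal. Returns the magnetizations m
    of the optimized product distribution and the mean-field (variational)
    lower bound on the log partition function log Z.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    m = rng.uniform(-0.1, 0.1, size=n)  # start near the uniform product measure
    for _ in range(iters):
        # damped fixed-point update m_i <- tanh(sum_j J_ij m_j)
        m = (1 - damping) * m + damping * np.tanh(J @ m)
    # entropy (in nats) of each spin's marginal, P(x_i = 1) = (1 + m_i) / 2
    p = (1 + m) / 2
    ent = -(p * np.log(np.clip(p, 1e-12, 1.0))
            + (1 - p) * np.log(np.clip(1 - p, 1e-12, 1.0)))
    # Gibbs variational bound: log Z >= E_q[energy] + H(q) for product q
    lower_bound = 0.5 * m @ J @ m + ent.sum()
    return m, lower_bound
```

The paper's KL-error bound quantifies how far such a lower bound can fall below the true $\log Z$; the bound above is always valid for any magnetization vector in $(-1,1)^n$, whether or not the fixed-point iteration has converged.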