The Benefits of Learning with Strongly Convex Approximate Inference

Ben London, Bert Huang, Lise Getoor
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:410-418, 2015.

Abstract

We explore the benefits of strongly convex free energies in variational inference, providing both theoretical motivation and a new meta-algorithm. Using the duality between strong convexity and stability, we prove a high-probability bound on the error of learned marginals that is inversely proportional to the modulus of convexity of the free energy, thereby motivating free energies whose moduli are constant with respect to the size of the graph. We identify sufficient conditions for Ω(1)-strong convexity in two popular variational techniques: tree-reweighted and counting number entropies. Our insights for the latter suggest a novel counting number optimization framework, which guarantees strong convexity for any given modulus. Our experiments demonstrate that learning with a strongly convex free energy, using our optimization framework to guarantee a given modulus, results in substantially more accurate marginal probabilities, thereby validating our theoretical claims and the effectiveness of our framework.
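For context, the standard convex-analysis definition underlying the abstract's "modulus of convexity" (a general textbook fact, not wording taken from the paper) is the following: a differentiable function $f$ is $\kappa$-strongly convex with modulus $\kappa > 0$ when

```latex
f(y) \;\ge\; f(x) + \nabla f(x)^{\top}(y - x) + \frac{\kappa}{2}\,\lVert y - x \rVert_2^2
\qquad \text{for all } x, y .
```

Under this reading, an $\Omega(1)$-strongly convex free energy is one whose modulus $\kappa$ stays bounded away from zero as the graph grows, so an error bound inversely proportional to $\kappa$ does not degrade with problem size.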

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-london15,
  title     = {The Benefits of Learning with Strongly Convex Approximate Inference},
  author    = {London, Ben and Huang, Bert and Getoor, Lise},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {410--418},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/london15.pdf},
  url       = {https://proceedings.mlr.press/v37/london15.html},
  abstract  = {We explore the benefits of strongly convex free energies in variational inference, providing both theoretical motivation and a new meta-algorithm. Using the duality between strong convexity and stability, we prove a high-probability bound on the error of learned marginals that is inversely proportional to the modulus of convexity of the free energy, thereby motivating free energies whose moduli are constant with respect to the size of the graph. We identify sufficient conditions for Ω(1)-strong convexity in two popular variational techniques: tree-reweighted and counting number entropies. Our insights for the latter suggest a novel counting number optimization framework, which guarantees strong convexity for any given modulus. Our experiments demonstrate that learning with a strongly convex free energy, using our optimization framework to guarantee a given modulus, results in substantially more accurate marginal probabilities, thereby validating our theoretical claims and the effectiveness of our framework.}
}
Endnote
%0 Conference Paper
%T The Benefits of Learning with Strongly Convex Approximate Inference
%A Ben London
%A Bert Huang
%A Lise Getoor
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-london15
%I PMLR
%P 410--418
%U https://proceedings.mlr.press/v37/london15.html
%V 37
%X We explore the benefits of strongly convex free energies in variational inference, providing both theoretical motivation and a new meta-algorithm. Using the duality between strong convexity and stability, we prove a high-probability bound on the error of learned marginals that is inversely proportional to the modulus of convexity of the free energy, thereby motivating free energies whose moduli are constant with respect to the size of the graph. We identify sufficient conditions for Ω(1)-strong convexity in two popular variational techniques: tree-reweighted and counting number entropies. Our insights for the latter suggest a novel counting number optimization framework, which guarantees strong convexity for any given modulus. Our experiments demonstrate that learning with a strongly convex free energy, using our optimization framework to guarantee a given modulus, results in substantially more accurate marginal probabilities, thereby validating our theoretical claims and the effectiveness of our framework.
RIS
TY - CPAPER
TI - The Benefits of Learning with Strongly Convex Approximate Inference
AU - Ben London
AU - Bert Huang
AU - Lise Getoor
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-london15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 410
EP - 418
L1 - http://proceedings.mlr.press/v37/london15.pdf
UR - https://proceedings.mlr.press/v37/london15.html
AB - We explore the benefits of strongly convex free energies in variational inference, providing both theoretical motivation and a new meta-algorithm. Using the duality between strong convexity and stability, we prove a high-probability bound on the error of learned marginals that is inversely proportional to the modulus of convexity of the free energy, thereby motivating free energies whose moduli are constant with respect to the size of the graph. We identify sufficient conditions for Ω(1)-strong convexity in two popular variational techniques: tree-reweighted and counting number entropies. Our insights for the latter suggest a novel counting number optimization framework, which guarantees strong convexity for any given modulus. Our experiments demonstrate that learning with a strongly convex free energy, using our optimization framework to guarantee a given modulus, results in substantially more accurate marginal probabilities, thereby validating our theoretical claims and the effectiveness of our framework.
ER -
APA
London, B., Huang, B. & Getoor, L. (2015). The Benefits of Learning with Strongly Convex Approximate Inference. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:410-418. Available from https://proceedings.mlr.press/v37/london15.html.

Related Material