CRVI: Convex Relaxation for Variational Inference

Ghazal Fazelnia, John Paisley
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1477-1485, 2018.

Abstract

We present a new technique for solving non-convex variational inference optimization problems. Variational inference is a widely used method for posterior approximation in which the inference problem is transformed into an optimization problem. For most models, this optimization problem is highly non-convex and therefore hard to solve. In this paper, we introduce a new approach to variational inference based on convex relaxation and semidefinite programming. Our theoretical results guarantee tight relaxation bounds that come closer to the global optimum than traditional coordinate ascent. We evaluate the performance of our approach on regression and sparse coding.
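For readers unfamiliar with the relaxation technique the abstract refers to, the sketch below illustrates the generic lifting idea behind semidefinite relaxations: a non-convex quadratic program is rewritten linearly in a lifted matrix variable X = x x^T, and the rank-one constraint on X is dropped, leaving a convex SDP whose value lower-bounds the original problem. This is a minimal illustration of the general principle only, not the paper's CRVI formulation; it assumes numpy and the CVXPY modeling library are available.

# Minimal sketch of generic SDP lifting (illustrative; not the paper's CRVI
# algorithm). We relax min_x x^T A x + b^T x s.t. ||x||_2 = 1 by lifting to
# X = x x^T and dropping the rank-1 constraint. Assumes numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2          # symmetric, generally indefinite -> non-convex in x
b = rng.standard_normal(n)

# Lifted variable: Z = [[X, x], [x^T, 1]] must be PSD; rank(Z) = 1 is dropped.
Z = cp.Variable((n + 1, n + 1), symmetric=True)
X, x = Z[:n, :n], Z[:n, n]
constraints = [
    Z >> 0,                # positive semidefiniteness of the lifted matrix
    Z[n, n] == 1,          # bottom-right corner fixed to 1
    cp.trace(X) == 1,      # trace(X) = ||x||^2 = 1 in the rank-1 case
]
objective = cp.Minimize(cp.trace(A @ X) + b @ x)
prob = cp.Problem(objective, constraints)
prob.solve()

print("SDP relaxation lower bound:", prob.value)
# If the optimal Z is (numerically) rank one, the relaxation is tight and x
# recovers a global optimum; otherwise the bound still certifies a gap.
eigvals = np.linalg.eigvalsh(Z.value)
print("top two eigenvalues of Z:", eigvals[-2:])

The feasible set here is compact (the trace and corner constraints bound the diagonal of a PSD matrix, which in turn bounds its off-diagonal entries), so the relaxed problem always attains its optimum even though the original quadratic program is non-convex.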

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-fazelnia18a,
  title     = {{CRVI}: Convex Relaxation for Variational Inference},
  author    = {Fazelnia, Ghazal and Paisley, John},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1477--1485},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/fazelnia18a/fazelnia18a.pdf},
  url       = {https://proceedings.mlr.press/v80/fazelnia18a.html},
  abstract  = {We present a new technique for solving non-convex variational inference optimization problems. Variational inference is a widely used method for posterior approximation in which the inference problem is transformed into an optimization problem. For most models, this optimization is highly non-convex and so hard to solve. In this paper, we introduce a new approach to solving the variational inference optimization based on convex relaxation and semidefinite programming. Our theoretical results guarantee very tight relaxation bounds that get nearer to the global optimal solution than traditional coordinate ascent. We evaluate the performance of our approach on regression and sparse coding.}
}
Endnote
%0 Conference Paper
%T CRVI: Convex Relaxation for Variational Inference
%A Ghazal Fazelnia
%A John Paisley
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-fazelnia18a
%I PMLR
%P 1477--1485
%U https://proceedings.mlr.press/v80/fazelnia18a.html
%V 80
%X We present a new technique for solving non-convex variational inference optimization problems. Variational inference is a widely used method for posterior approximation in which the inference problem is transformed into an optimization problem. For most models, this optimization is highly non-convex and so hard to solve. In this paper, we introduce a new approach to solving the variational inference optimization based on convex relaxation and semidefinite programming. Our theoretical results guarantee very tight relaxation bounds that get nearer to the global optimal solution than traditional coordinate ascent. We evaluate the performance of our approach on regression and sparse coding.
APA
Fazelnia, G. & Paisley, J. (2018). CRVI: Convex Relaxation for Variational Inference. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1477-1485. Available from https://proceedings.mlr.press/v80/fazelnia18a.html.