On Variational Bounds of Mutual Information

Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, George Tucker
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5171-5180, 2019.

Abstract

Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning, but bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks. However, the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of these new bounds for estimation and representation learning.
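
One of the multi-sample lower bounds the paper unifies is InfoNCE, whose estimate from a batch of K paired samples is capped at log K; this cap is the high-bias regime the abstract describes when the true MI exceeds log(batch size). The sketch below is a minimal NumPy illustration, not the authors' code: the function name infonce_lower_bound and the bilinear toy critic are assumptions made for the example, and in practice the critic would be a learned neural network.

    import numpy as np
    from scipy.special import logsumexp

    def infonce_lower_bound(scores):
        """InfoNCE lower bound on MI from a [K, K] critic matrix.

        scores[i, j] = f(x_i, y_j): diagonal entries score jointly
        sampled (positive) pairs, off-diagonal entries score
        independently paired (negative) samples. The estimate cannot
        exceed log K for any critic f.
        """
        K = scores.shape[0]
        # Average over rows of log( e^{f(x_i, y_i)} / ((1/K) sum_j e^{f(x_i, y_j)}) )
        return np.mean(np.diag(scores) - (logsumexp(scores, axis=1) - np.log(K)))

    # Toy check on correlated Gaussians with an illustrative bilinear
    # critic f(x, y) = x * y (not a learned critic): since InfoNCE is a
    # valid lower bound for any critic, the estimate should sit below
    # the analytic MI of -0.5 * log(1 - rho^2).
    rng = np.random.default_rng(0)
    K, rho = 128, 0.8
    x = rng.standard_normal((K, 1))
    y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal((K, 1))
    scores = x @ y.T
    print(infonce_lower_bound(scores), "<=", -0.5 * np.log(1.0 - rho**2))
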

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-poole19a,
  title     = {On Variational Bounds of Mutual Information},
  author    = {Poole, Ben and Ozair, Sherjil and Van Den Oord, Aaron and Alemi, Alex and Tucker, George},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5171--5180},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/poole19a/poole19a.pdf},
  url       = {https://proceedings.mlr.press/v97/poole19a.html}
}
Endnote
%0 Conference Paper
%T On Variational Bounds of Mutual Information
%A Ben Poole
%A Sherjil Ozair
%A Aaron Van Den Oord
%A Alex Alemi
%A George Tucker
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-poole19a
%I PMLR
%P 5171--5180
%U https://proceedings.mlr.press/v97/poole19a.html
%V 97
APA
Poole, B., Ozair, S., Van Den Oord, A., Alemi, A. & Tucker, G. (2019). On Variational Bounds of Mutual Information. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5171-5180. Available from https://proceedings.mlr.press/v97/poole19a.html.
