Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

Yangjun Ruan, Karen Ullrich, Daniel S Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris Maddison
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9136-9147, 2021.

Abstract

Latent variable models have been successfully applied in lossless compression with the bits-back coding algorithm. However, bits-back suffers from an increase in the bitrate equal to the KL divergence between the approximate posterior and the true posterior. In this paper, we show how to remove this gap asymptotically by deriving bits-back coding algorithms from tighter variational bounds. The key idea is to exploit extended space representations of Monte Carlo estimators of the marginal likelihood. Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space. When parallel architectures can be exploited, our coders can achieve better rates than bits-back with little additional cost. We demonstrate improved lossless compression rates in a variety of settings, especially in out-of-distribution or sequential data compression.
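The gap the abstract refers to can be made precise with two standard identities from the bits-back and importance-weighted-bound literature (the notation below is ours, not taken from the paper; $q$ is the approximate posterior and $N$ the number of Monte Carlo samples):

\[
R_{\text{BB}} \;=\; \mathbb{E}_{q(z \mid x)}\!\left[\log \frac{q(z \mid x)}{p(x, z)}\right] \;=\; -\log p(x) \;+\; \mathrm{KL}\bigl(q(z \mid x) \,\|\, p(z \mid x)\bigr),
\]

so standard bits-back pays the posterior KL divergence on top of the ideal rate $-\log p(x)$. A coder derived from an $N$-sample importance-weighted estimator of the marginal likelihood instead achieves, in expectation,

\[
R_{\text{MC}} \;=\; -\,\mathbb{E}_{z_1, \dots, z_N \sim q(z \mid x)}\!\left[\log \frac{1}{N} \sum_{i=1}^{N} \frac{p(x, z_i)}{q(z_i \mid x)}\right] \;\xrightarrow{\;N \to \infty\;}\; -\log p(x),
\]

which is how tighter variational bounds translate into removing the bitrate gap asymptotically.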

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ruan21a,
  title     = {Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding},
  author    = {Ruan, Yangjun and Ullrich, Karen and Severo, Daniel S and Townsend, James and Khisti, Ashish and Doucet, Arnaud and Makhzani, Alireza and Maddison, Chris},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9136--9147},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ruan21a/ruan21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ruan21a.html}
}
Endnote
%0 Conference Paper
%T Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding
%A Yangjun Ruan
%A Karen Ullrich
%A Daniel S Severo
%A James Townsend
%A Ashish Khisti
%A Arnaud Doucet
%A Alireza Makhzani
%A Chris Maddison
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ruan21a
%I PMLR
%P 9136--9147
%U https://proceedings.mlr.press/v139/ruan21a.html
%V 139
APA
Ruan, Y., Ullrich, K., Severo, D.S., Townsend, J., Khisti, A., Doucet, A., Makhzani, A. & Maddison, C. (2021). Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9136-9147. Available from https://proceedings.mlr.press/v139/ruan21a.html.