Improved Variational Inference in Discrete VAEs using Error Correcting Codes

María Martínez-García, Grace Villacrés, David Mitchell, Pablo M. Olmos
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2973-3012, 2025.

Abstract

Despite advances in deep probabilistic models, learning discrete latent representations remains challenging. This work introduces a novel method to improve inference in discrete Variational Autoencoders by reframing the inference problem through a generative perspective. We conceptualize the model as a communication system, and propose to leverage Error-Correcting Codes (ECCs) to introduce redundancy in latent representations, allowing the variational posterior to produce more accurate estimates and reduce the variational gap. We present a proof-of-concept using a Discrete Variational Autoencoder with binary latent variables and low-complexity repetition codes, extending it to a hierarchical structure for disentangling global and local data features. Our approach significantly improves generation quality, data reconstruction, and uncertainty calibration, outperforming the uncoded models even when trained with tighter bounds such as the Importance Weighted Autoencoder objective. We also outline the properties that ECCs should possess to be effectively utilized for improved discrete variational inference.
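The abstract's core idea of adding redundancy via low-complexity repetition codes can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic rate-1/r repetition code over binary latent vectors with majority-vote decoding, using NumPy, where the function names and the choice r = 3 are illustrative assumptions:

```python
import numpy as np

def repetition_encode(bits, r=3):
    # Repeat each binary latent bit r times (rate 1/r repetition code),
    # introducing the redundancy the coded latent representation carries.
    return np.repeat(bits, r)

def repetition_decode(noisy_bits, r=3):
    # Majority-vote decoding: each bit is recovered from its r copies,
    # so up to floor((r-1)/2) flips per group are corrected.
    groups = noisy_bits.reshape(-1, r)
    return (groups.sum(axis=1) * 2 > r).astype(int)

z = np.array([1, 0, 1, 1])            # binary latent vector
c = repetition_encode(z, r=3)         # coded latent, length 12
c_noisy = c.copy()
c_noisy[1] ^= 1                       # flip one redundant bit ("channel noise")
z_hat = repetition_decode(c_noisy, r=3)
assert np.array_equal(z_hat, z)       # the single flip is corrected
```

In the communication-system view the abstract describes, the decoder's ability to absorb such bit flips is what lets a variational posterior over the coded latents be more forgiving of estimation errors.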

Cite this Paper

BibTeX
@InProceedings{pmlr-v286-martinez-garcia25a,
  title     = {Improved Variational Inference in Discrete VAEs using Error Correcting Codes},
  author    = {Mart\'{i}nez-Garc\'{i}a, Mar\'{i}a and Villacr\'{e}s, Grace and Mitchell, David and Olmos, Pablo M.},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2973--3012},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/martinez-garcia25a/martinez-garcia25a.pdf},
  url       = {https://proceedings.mlr.press/v286/martinez-garcia25a.html},
  abstract  = {Despite advances in deep probabilistic models, learning discrete latent representations remains challenging. This work introduces a novel method to improve inference in discrete Variational Autoencoders by reframing the inference problem through a generative perspective. We conceptualize the model as a communication system, and propose to leverage Error-Correcting Codes (ECCs) to introduce redundancy in latent representations, allowing the variational posterior to produce more accurate estimates and reduce the variational gap. We present a proof-of-concept using a Discrete Variational Autoencoder with binary latent variables and low-complexity repetition codes, extending it to a hierarchical structure for disentangling global and local data features. Our approach significantly improves generation quality, data reconstruction, and uncertainty calibration, outperforming the uncoded models even when trained with tighter bounds such as the Importance Weighted Autoencoder objective. We also outline the properties that ECCs should possess to be effectively utilized for improved discrete variational inference.}
}
Endnote
%0 Conference Paper
%T Improved Variational Inference in Discrete VAEs using Error Correcting Codes
%A María Martínez-García
%A Grace Villacrés
%A David Mitchell
%A Pablo M. Olmos
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-martinez-garcia25a
%I PMLR
%P 2973--3012
%U https://proceedings.mlr.press/v286/martinez-garcia25a.html
%V 286
%X Despite advances in deep probabilistic models, learning discrete latent representations remains challenging. This work introduces a novel method to improve inference in discrete Variational Autoencoders by reframing the inference problem through a generative perspective. We conceptualize the model as a communication system, and propose to leverage Error-Correcting Codes (ECCs) to introduce redundancy in latent representations, allowing the variational posterior to produce more accurate estimates and reduce the variational gap. We present a proof-of-concept using a Discrete Variational Autoencoder with binary latent variables and low-complexity repetition codes, extending it to a hierarchical structure for disentangling global and local data features. Our approach significantly improves generation quality, data reconstruction, and uncertainty calibration, outperforming the uncoded models even when trained with tighter bounds such as the Importance Weighted Autoencoder objective. We also outline the properties that ECCs should possess to be effectively utilized for improved discrete variational inference.
APA
Martínez-García, M., Villacrés, G., Mitchell, D. & Olmos, P. M. (2025). Improved Variational Inference in Discrete VAEs using Error Correcting Codes. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2973-3012. Available from https://proceedings.mlr.press/v286/martinez-garcia25a.html.