StrWAEs to Invariant Representations

Hyunjong Lee, Yedarm Seong, Sungdong Lee, Joong-Ho Won
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:26902-26928, 2024.

Abstract

Autoencoders have become an indispensable tool for generative modeling and representation learning in high dimensions. Imposing structural constraints such as conditional independence, in order to capture invariance of latent variables to nuisance information, has typically been attempted by adding ad hoc penalties to the loss function, mostly in the variational autoencoder (VAE) context and often based on heuristics. This paper demonstrates that Wasserstein autoencoders (WAEs) are highly flexible in embracing such structural constraints. Well-known extensions of VAEs for this purpose are handled gracefully within the WAE framework. In particular, given a conditional independence structure of the generative model (decoder), the corresponding encoder structure and penalties are derived from the functional constraints that define the WAE. These structural uses of WAEs, termed StrWAEs (“stairways”), open up a principled way of penalizing autoencoders to impose structural constraints. Utilizing these advantages, we present a handful of results on semi-supervised classification, conditional generation, and invariant representation tasks.
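
To give a concrete picture of the kind of objective the abstract refers to, the sketch below trains a plain WAE whose latent distribution is matched to the prior with an MMD penalty. It is a generic PyTorch illustration under assumed names (ToyWAE, rbf_mmd2), not the StrWAE implementation from the paper; StrWAEs derive additional, structured penalties from the conditional-independence constraints of the decoder rather than hand-tuning them.

    import torch
    import torch.nn as nn

    def rbf_mmd2(x, y, sigma=1.0):
        # Plug-in estimate of squared MMD with an RBF kernel between samples x, y (n x d).
        def k(a, b):
            return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    class ToyWAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
            self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

        def loss(self, x, lam=10.0):
            z = self.enc(x)                                 # deterministic encoder
            x_hat = self.dec(z)                             # decoder (generative model)
            recon = ((x - x_hat) ** 2).sum(dim=1).mean()    # reconstruction cost
            penalty = rbf_mmd2(z, torch.randn_like(z))      # match encoded distribution to the prior
            return recon + lam * penalty

In a structured variant, divergence penalties of the same form would be placed on the parts of the latent code that must remain invariant to nuisance variables, following the conditional-independence structure of the decoder.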

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lee24u,
  title     = {{S}tr{WAE}s to Invariant Representations},
  author    = {Lee, Hyunjong and Seong, Yedarm and Lee, Sungdong and Won, Joong-Ho},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {26902--26928},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24u/lee24u.pdf},
  url       = {https://proceedings.mlr.press/v235/lee24u.html},
  abstract  = {Autoencoders have become an indispensable tool for generative modeling and representation learning in high dimensions. Imposing structural constraints such as conditional independence in order to capture invariance of latent variables to nuisance information has been attempted through adding ad hoc penalties to the loss function mostly in the variational autoencoder (VAE) context, often based on heuristics. This paper demonstrates that Wasserstein autoencoders (WAEs) are highly flexible in embracing such structural constraints. Well-known extensions of VAEs for this purpose are gracefully handled within the framework of WAEs. In particular, given a conditional independence structure of the generative model (decoder), corresponding encoder structure and penalties are derived from the functional constraints that define the WAE. These structural uses of WAEs, termed StrWAEs (“stairways”), open up a principled way of penalizing autoencoders to impose structural constraints. Utilizing these advantages, we present handful of results on semi-supervised classification, conditional generation, and invariant representation tasks.}
}
Endnote
%0 Conference Paper
%T StrWAEs to Invariant Representations
%A Hyunjong Lee
%A Yedarm Seong
%A Sungdong Lee
%A Joong-Ho Won
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lee24u
%I PMLR
%P 26902--26928
%U https://proceedings.mlr.press/v235/lee24u.html
%V 235
%X Autoencoders have become an indispensable tool for generative modeling and representation learning in high dimensions. Imposing structural constraints such as conditional independence in order to capture invariance of latent variables to nuisance information has been attempted through adding ad hoc penalties to the loss function mostly in the variational autoencoder (VAE) context, often based on heuristics. This paper demonstrates that Wasserstein autoencoders (WAEs) are highly flexible in embracing such structural constraints. Well-known extensions of VAEs for this purpose are gracefully handled within the framework of WAEs. In particular, given a conditional independence structure of the generative model (decoder), corresponding encoder structure and penalties are derived from the functional constraints that define the WAE. These structural uses of WAEs, termed StrWAEs (“stairways”), open up a principled way of penalizing autoencoders to impose structural constraints. Utilizing these advantages, we present handful of results on semi-supervised classification, conditional generation, and invariant representation tasks.
APA
Lee, H., Seong, Y., Lee, S. & Won, J. (2024). StrWAEs to Invariant Representations. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:26902-26928. Available from https://proceedings.mlr.press/v235/lee24u.html.
