The LORACs Prior for VAEs: Letting the Trees Speak for the Data
Proceedings of Machine Learning Research, PMLR 89:3292-3301, 2019.
Abstract
In variational autoencoders, the prior on the latent codes $z$ is often treated as an afterthought, but the prior shapes the kind of latent representation that the model learns. If the goal is to learn a representation that is interpretable and useful, then the prior should reflect the ways in which the high-level factors that describe the data vary. The “default” prior is a standard normal, but if the natural factors of variation in the dataset exhibit discrete structure or are not independent, then the isotropic-normal prior will actually encourage learning representations that mask this structure. To alleviate this problem, we propose using a flexible Bayesian nonparametric hierarchical clustering prior based on the time-marginalized coalescent (TMC). To scale learning to large datasets, we develop a new inducing-point approximation and inference algorithm. We then apply the method without supervision to several datasets and examine the interpretability and practical performance of the inferred hierarchies and learned latent space.
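To illustrate the abstract's central claim, here is a small numpy sketch (not from the paper; the cluster locations and mixture weights are illustrative assumptions). When the latent codes fall into discrete clusters, a structured prior that reflects that clustering assigns them much higher density than the standard-normal default, which instead pressures the encoder to smear the clusters together:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D latent codes from two well-separated clusters, as would arise
# if the data had a discrete factor of variation (values are illustrative).
z = np.concatenate([rng.normal(-3.0, 0.3, 500), rng.normal(3.0, 0.3, 500)])

def log_normal(x, mu, sigma):
    """Log-density of a univariate normal."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Average log-density under the isotropic standard-normal "default" prior...
lp_standard = log_normal(z, 0.0, 1.0).mean()

# ...versus under a two-component mixture prior matched to the clusters
# (a stand-in for a structured prior; the paper's TMC prior is hierarchical).
lp_mixture = np.logaddexp(
    np.log(0.5) + log_normal(z, -3.0, 0.5),
    np.log(0.5) + log_normal(z, 3.0, 0.5),
).mean()

print(lp_standard)  # roughly -5.5: the default prior penalizes the clusters
print(lp_mixture)   # roughly -0.9: the structured prior fits them well
```

Since the prior's log-density enters the ELBO through the KL term, this gap is exactly the pressure that makes an isotropic-normal prior mask discrete structure.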