oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis

Samuel K. Ainsworth, Nicholas J. Foti, Adrian K. C. Lee, Emily B. Fox
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:119-128, 2018.

Abstract

Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE comprised of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.
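The core construction in the abstract — a latent code feeding group-specific generators, with a sparsity-inducing prior on the latent-to-group mappings — can be sketched minimally. This is an illustrative stand-in, not the authors' implementation: the group names, sizes, and the linear generators are hypothetical (the paper's generators are nonlinear neural networks), and the group-lasso-style penalty stands in for the sparsity-inducing prior.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                            # latent dimension (illustrative)
groups = {"arm": 3, "leg": 5}    # hypothetical observed-variable groups and sizes

# One latent-to-group weight matrix per group. A column-wise sparsity penalty
# encourages entire columns (latent dimensions) to be zeroed out for a group,
# exposing which latent factors drive which observation group.
W = {g: rng.normal(size=(d, K)) for g, d in groups.items()}

def decode(z):
    """Map a latent code to per-group outputs via group-specific generators.
    (Linear maps here; the paper uses nonlinear generators.)"""
    return {g: W_g @ z for g, W_g in W.items()}

def group_sparsity_penalty(W):
    """Sum of column-wise L2 norms over groups: a group-lasso-style
    surrogate for the paper's sparsity-inducing prior."""
    return sum(np.linalg.norm(W_g, axis=0).sum() for W_g in W.values())

z = rng.normal(size=K)
out = decode(z)
penalty = group_sparsity_penalty(W)
```

Adding the penalty to a VAE training objective would drive some latent-to-group columns toward zero, yielding the interpretable latent-group interaction structure the abstract describes.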

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-ainsworth18a,
  title     = {oi-{VAE}: Output Interpretable {VAE}s for Nonlinear Group Factor Analysis},
  author    = {Ainsworth, Samuel K. and Foti, Nicholas J. and Lee, Adrian K. C. and Fox, Emily B.},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {119--128},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/ainsworth18a/ainsworth18a.pdf},
  url       = {https://proceedings.mlr.press/v80/ainsworth18a.html},
  abstract  = {Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE comprised of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.}
}
Endnote
%0 Conference Paper
%T oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis
%A Samuel K. Ainsworth
%A Nicholas J. Foti
%A Adrian K. C. Lee
%A Emily B. Fox
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-ainsworth18a
%I PMLR
%P 119--128
%U https://proceedings.mlr.press/v80/ainsworth18a.html
%V 80
%X Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE comprised of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.
APA
Ainsworth, S.K., Foti, N.J., Lee, A.K.C. & Fox, E.B. (2018). oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:119-128. Available from https://proceedings.mlr.press/v80/ainsworth18a.html.