Longitudinal Variational Autoencoder

Siddharth Ramchandran, Gleb Tikhonov, Kalle Kujanpää, Miika Koskinen, Harri Lähdesmäki
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3898-3906, 2021.

Abstract

Longitudinal datasets, measured repeatedly over time from individual subjects, arise in many biomedical, psychological, social, and other studies. A common approach to analyse high-dimensional data that contains missing values is to learn a low-dimensional representation using variational autoencoders (VAEs). However, standard VAEs assume that the learnt representations are i.i.d., and fail to capture the correlations between the data samples. We propose the Longitudinal VAE (L-VAE), which uses a multi-output additive Gaussian process (GP) prior to extend the VAE’s capability to learn structured low-dimensional representations imposed by auxiliary covariate information, and derive a new KL divergence upper bound for such GPs. Our approach can simultaneously accommodate both time-varying shared and random effects, produce structured low-dimensional representations, disentangle effects of individual covariates or their interactions, and achieve highly accurate predictive performance. We compare our model against previous methods on synthetic as well as clinical datasets, and demonstrate state-of-the-art performance in data imputation, reconstruction, and long-term prediction tasks.
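The core idea, a VAE whose latent prior is an additive GP over time with subject-specific random effects, can be illustrated with a minimal sketch. This is not the paper's implementation: the kernel choices, parameter values, and the diagonal-posterior assumption below are illustrative only, and the KL term is the standard closed-form multivariate-Gaussian KL rather than the paper's new upper bound.

```python
import numpy as np

def rbf_kernel(t, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel over time points t (shape [N]).
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def additive_gp_prior(t, subject_ids, noise=1e-3):
    # Illustrative additive covariance: a shared smooth temporal effect
    # plus a subject-specific random effect (nonzero only within the
    # block of samples from the same subject), plus jitter for stability.
    K_time = rbf_kernel(t)
    same_subject = (subject_ids[:, None] == subject_ids[None, :]).astype(float)
    K_subject = 0.5 * same_subject * rbf_kernel(t, lengthscale=0.5)
    return K_time + K_subject + noise * np.eye(len(t))

def kl_diag_gaussian_vs_gp(mu, var, K):
    # KL( N(mu, diag(var)) || N(0, K) ) for one latent dimension across
    # all N samples, via the closed-form multivariate-Gaussian KL.
    N = len(mu)
    K_inv = np.linalg.inv(K)
    _, logdet_K = np.linalg.slogdet(K)
    logdet_q = np.sum(np.log(var))
    return 0.5 * (np.trace(K_inv @ np.diag(var))
                  + mu @ K_inv @ mu - N + logdet_K - logdet_q)
```

In a training loop, this KL term would replace the standard-normal KL of a vanilla VAE's ELBO, so that latent codes of samples close in time (or from the same subject) are encouraged to be correlated rather than i.i.d.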

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-ramchandran21b,
  title = {Longitudinal Variational Autoencoder},
  author = {Ramchandran, Siddharth and Tikhonov, Gleb and Kujanp{\"a}{\"a}, Kalle and Koskinen, Miika and L{\"a}hdesm{\"a}ki, Harri},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages = {3898--3906},
  year = {2021},
  editor = {Banerjee, Arindam and Fukumizu, Kenji},
  volume = {130},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v130/ramchandran21b/ramchandran21b.pdf},
  url = {https://proceedings.mlr.press/v130/ramchandran21b.html},
  abstract = {Longitudinal datasets measured repeatedly over time from individual subjects, arise in many biomedical, psychological, social, and other studies. A common approach to analyse high-dimensional data that contains missing values is to learn a low-dimensional representation using variational autoencoders (VAEs). However, standard VAEs assume that the learnt representations are i.i.d., and fail to capture the correlations between the data samples. We propose the Longitudinal VAE (L-VAE), that uses a multi-output additive Gaussian process (GP) prior to extend the VAE’s capability to learn structured low-dimensional representations imposed by auxiliary covariate information, and derive a new KL divergence upper bound for such GPs. Our approach can simultaneously accommodate both time-varying shared and random effects, produce structured low-dimensional representations, disentangle effects of individual covariates or their interactions, and achieve highly accurate predictive performance. We compare our model against previous methods on synthetic as well as clinical datasets, and demonstrate the state-of-the-art performance in data imputation, reconstruction, and long-term prediction tasks.}
}
Endnote
%0 Conference Paper %T Longitudinal Variational Autoencoder %A Siddharth Ramchandran %A Gleb Tikhonov %A Kalle Kujanpää %A Miika Koskinen %A Harri Lähdesmäki %B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2021 %E Arindam Banerjee %E Kenji Fukumizu %F pmlr-v130-ramchandran21b %I PMLR %P 3898--3906 %U https://proceedings.mlr.press/v130/ramchandran21b.html %V 130 %X Longitudinal datasets measured repeatedly over time from individual subjects, arise in many biomedical, psychological, social, and other studies. A common approach to analyse high-dimensional data that contains missing values is to learn a low-dimensional representation using variational autoencoders (VAEs). However, standard VAEs assume that the learnt representations are i.i.d., and fail to capture the correlations between the data samples. We propose the Longitudinal VAE (L-VAE), that uses a multi-output additive Gaussian process (GP) prior to extend the VAE’s capability to learn structured low-dimensional representations imposed by auxiliary covariate information, and derive a new KL divergence upper bound for such GPs. Our approach can simultaneously accommodate both time-varying shared and random effects, produce structured low-dimensional representations, disentangle effects of individual covariates or their interactions, and achieve highly accurate predictive performance. We compare our model against previous methods on synthetic as well as clinical datasets, and demonstrate the state-of-the-art performance in data imputation, reconstruction, and long-term prediction tasks.
APA
Ramchandran, S., Tikhonov, G., Kujanpää, K., Koskinen, M. & Lähdesmäki, H. (2021). Longitudinal Variational Autoencoder. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3898-3906. Available from https://proceedings.mlr.press/v130/ramchandran21b.html.