Interpretable Representation Learning from Temporal Multi-view Data

Lin Qiu, Vernon M. Chinchilli, Lin Lin
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:864-879, 2023.

Abstract

In many scientific problems such as video surveillance, modern genomics, and finance, data are often collected from diverse measurements across time that exhibit time-dependent heterogeneous properties. Thus, it is important not only to integrate data from multiple sources (called multi-view data), but also to incorporate time dependency for a deep understanding of the underlying system. We propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamics of multi-view temporal data. This approach allows us to identify disentangled latent embeddings across views while accounting for the time factor. We apply our proposed model to three datasets, on which we demonstrate the effectiveness and interpretability of the model.
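
To make the modeling idea concrete, below is a minimal sketch of how a VAE with an RNN prior over the latent sequence and per-view encoders and decoders could be wired up in PyTorch. This is an illustration only, not the authors' implementation: the class name, layer sizes, the choice of a GRU prior, and the use of one latent sequence per view with a shared prior are all assumptions; the paper's actual architecture and its mechanism for disentangling embeddings across views should be taken from the linked PDF.

    # Minimal sketch (not the authors' code): a VAE whose prior over the
    # latent sequence is an RNN, with one encoder/decoder per view.
    # All sizes and the shared-GRU prior are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiViewTemporalVAE(nn.Module):
        def __init__(self, view_dims, latent_dim=8, hidden_dim=32):
            super().__init__()
            # One encoder and one decoder per view.
            self.encoders = nn.ModuleList(
                nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU(),
                              nn.Linear(hidden_dim, 2 * latent_dim))
                for d in view_dims)
            self.decoders = nn.ModuleList(
                nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                              nn.Linear(hidden_dim, d))
                for d in view_dims)
            # Shared RNN prior: parameterizes p(z_t | z_{<t}), tying the
            # latent dynamics across views.
            self.prior_rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
            self.prior_head = nn.Linear(hidden_dim, 2 * latent_dim)

        def forward(self, views):
            # views: list of tensors, one per view, each (batch, T, view_dim)
            recons, kl = [], 0.0
            for enc, dec, x in zip(self.encoders, self.decoders, views):
                # Amortized posterior q(z_t | x_t), diagonal Gaussian.
                mu, logvar = enc(x).chunk(2, dim=-1)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
                # Prior for t >= 1 from the RNN; standard normal at t = 0.
                h, _ = self.prior_rnn(z[:, :-1])
                pmu, plogvar = self.prior_head(h).chunk(2, dim=-1)
                pmu = torch.cat([torch.zeros_like(pmu[:, :1]), pmu], dim=1)
                plogvar = torch.cat(
                    [torch.zeros_like(plogvar[:, :1]), plogvar], dim=1)
                # KL(q || p) between diagonal Gaussians, summed over time.
                kl = kl + 0.5 * (plogvar - logvar
                                 + (logvar.exp() + (mu - pmu) ** 2)
                                 / plogvar.exp() - 1).sum()
                recons.append(dec(z))
            return recons, kl

Training such a model would maximize the ELBO, i.e., minimize the sum of per-view reconstruction errors plus the KL term returned above; again, the paper's actual objective may differ from this sketch.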

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-qiu23a,
  title     = {Interpretable Representation Learning from Temporal Multi-view Data},
  author    = {Qiu, Lin and Chinchilli, Vernon M. and Lin, Lin},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {864--879},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/qiu23a/qiu23a.pdf},
  url       = {https://proceedings.mlr.press/v189/qiu23a.html},
  abstract  = {In many scientific problems such as video surveillance, modern genomics, and finance, data are often collected from diverse measurements across time that exhibit time-dependent heterogeneous properties. Thus, it is important not only to integrate data from multiple sources (called multi-view data), but also to incorporate time dependency for a deep understanding of the underlying system. We propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamics of multi-view temporal data. This approach allows us to identify disentangled latent embeddings across views while accounting for the time factor. We apply our proposed model to three datasets, on which we demonstrate the effectiveness and interpretability of the model.}
}
Endnote
%0 Conference Paper
%T Interpretable Representation Learning from Temporal Multi-view Data
%A Lin Qiu
%A Vernon M. Chinchilli
%A Lin Lin
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-qiu23a
%I PMLR
%P 864--879
%U https://proceedings.mlr.press/v189/qiu23a.html
%V 189
%X In many scientific problems such as video surveillance, modern genomics, and finance, data are often collected from diverse measurements across time that exhibit time-dependent heterogeneous properties. Thus, it is important not only to integrate data from multiple sources (called multi-view data), but also to incorporate time dependency for a deep understanding of the underlying system. We propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamics of multi-view temporal data. This approach allows us to identify disentangled latent embeddings across views while accounting for the time factor. We apply our proposed model to three datasets, on which we demonstrate the effectiveness and interpretability of the model.
APA
Qiu, L., Chinchilli, V. M., & Lin, L. (2023). Interpretable Representation Learning from Temporal Multi-view Data. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:864-879. Available from https://proceedings.mlr.press/v189/qiu23a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v189/qiu23a/qiu23a.pdf