Learning from Irregularly-Sampled Time Series: A Missing Data Perspective

Steven Cheng-Xian Li, Benjamin Marlin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5937-5946, 2020.

Abstract

Irregularly-sampled time series occur in many domains including healthcare. They can be challenging to model because they do not naturally yield a fixed-dimensional representation as required by many standard machine learning models. In this paper, we consider irregular sampling from the perspective of missing data. We model observed irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function. We introduce an encoder-decoder framework for learning from such generic indexed sequences. We propose learning methods for this framework based on variational autoencoders and generative adversarial networks. For continuous irregularly-sampled time series, we introduce continuous convolutional layers that can efficiently interface with existing neural network architectures. Experiments show that our models are able to achieve competitive or better classification results on irregularly-sampled multivariate time series compared to recent RNN models while offering significantly faster training times.
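The abstract's "continuous convolutional layers" interface irregular index-value pairs with fixed-grid architectures. As a rough illustration of that general idea (not the authors' actual layer), the sketch below uses a Gaussian kernel smoother to map observations at arbitrary times onto a regular grid; the function name, kernel choice, and `bandwidth` parameter are all assumptions for this example.

```python
import numpy as np

def continuous_conv(t_obs, x_obs, grid, bandwidth=0.1):
    """Illustrative sketch: project irregular samples (t_obs, x_obs) onto
    fixed grid points via a normalized Gaussian kernel, so downstream
    fixed-dimensional layers can consume the result."""
    # Pairwise time offsets between each grid point and each observation
    d = grid[:, None] - t_obs[None, :]           # shape (G, N)
    # Gaussian kernel weights; nearby observations contribute more
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # shape (G, N)
    # Normalized weighted average of observed values at each grid point
    w_sum = np.clip(w.sum(axis=1, keepdims=True), 1e-8, None)
    return (w * x_obs[None, :]).sum(axis=1, keepdims=True).squeeze(1) / w_sum.squeeze(1)
```

Because the weights are normalized, each output is a convex combination of observed values; a learned kernel (as in the paper) would replace the fixed Gaussian here.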

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-li20k,
  title     = {Learning from Irregularly-Sampled Time Series: A Missing Data Perspective},
  author    = {Li, Steven Cheng-Xian and Marlin, Benjamin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5937--5946},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/li20k/li20k.pdf},
  url       = {http://proceedings.mlr.press/v119/li20k.html},
  abstract  = {Irregularly-sampled time series occur in many domains including healthcare. They can be challenging to model because they do not naturally yield a fixed-dimensional representation as required by many standard machine learning models. In this paper, we consider irregular sampling from the perspective of missing data. We model observed irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function. We introduce an encoder-decoder framework for learning from such generic indexed sequences. We propose learning methods for this framework based on variational autoencoders and generative adversarial networks. For continuous irregularly-sampled time series, we introduce continuous convolutional layers that can efficiently interface with existing neural network architectures. Experiments show that our models are able to achieve competitive or better classification results on irregularly-sampled multivariate time series compared to recent RNN models while offering significantly faster training times.}
}
Endnote
%0 Conference Paper
%T Learning from Irregularly-Sampled Time Series: A Missing Data Perspective
%A Steven Cheng-Xian Li
%A Benjamin Marlin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-li20k
%I PMLR
%P 5937--5946
%U http://proceedings.mlr.press/v119/li20k.html
%V 119
%X Irregularly-sampled time series occur in many domains including healthcare. They can be challenging to model because they do not naturally yield a fixed-dimensional representation as required by many standard machine learning models. In this paper, we consider irregular sampling from the perspective of missing data. We model observed irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function. We introduce an encoder-decoder framework for learning from such generic indexed sequences. We propose learning methods for this framework based on variational autoencoders and generative adversarial networks. For continuous irregularly-sampled time series, we introduce continuous convolutional layers that can efficiently interface with existing neural network architectures. Experiments show that our models are able to achieve competitive or better classification results on irregularly-sampled multivariate time series compared to recent RNN models while offering significantly faster training times.
APA
Li, S.C. & Marlin, B. (2020). Learning from Irregularly-Sampled Time Series: A Missing Data Perspective. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5937-5946. Available from http://proceedings.mlr.press/v119/li20k.html.
