Leveraging Time Irreversibility with Order-Contrastive Pre-training

Monica N. Agrawal, Hunter Lang, Michael Offin, Lior Gazit, David Sontag
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2330-2353, 2022.

Abstract

Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques. To overcome the difficulties posed by a lack of labeled data, we explore an "order-contrastive" method for self-supervised pre-training on longitudinal data. We sample pairs of time segments, switch the order for half of them, and train a model to predict whether a given pair is in the correct order. Intuitively, the ordering task allows the model to attend to the least time-reversible features (for example, features that indicate progression of a chronic disease). The same features are often useful for downstream tasks of interest. To quantify this, we study a simple theoretical setting where we prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training. Empirically, in synthetic and longitudinal healthcare settings, we demonstrate the effectiveness of order-contrastive pre-training in the small-data regime over supervised learning and other self-supervised pre-training baselines. Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.
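
The abstract describes the pre-training task procedurally: sample a pair of time segments, swap them half the time, and classify whether the pair is in temporal order. Below is a minimal sketch of that setup, assuming PyTorch; the synthetic data, segment length, encoder architecture, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of order-contrastive pre-training (illustrative only).
# Assumes PyTorch; data, architecture, and hyperparameters are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

SEG_LEN, N_FEATURES = 5, 8  # segment length and features per time step (assumed)

def sample_pair(sequence, seg_len=SEG_LEN):
    """Draw two non-overlapping segments from one longitudinal sequence,
    swap them with probability 1/2, and return (seg_a, seg_b, order_label)."""
    T = sequence.shape[0]
    start1 = torch.randint(0, T - 2 * seg_len, (1,)).item()
    start2 = torch.randint(start1 + seg_len, T - seg_len + 1, (1,)).item()
    earlier = sequence[start1:start1 + seg_len]
    later = sequence[start2:start2 + seg_len]
    if torch.rand(1).item() < 0.5:
        return earlier, later, 1.0  # pair kept in correct temporal order
    return later, earlier, 0.0      # pair swapped

class SegmentEncoder(nn.Module):
    """Maps a (seg_len, n_features) segment to a fixed-size representation."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(SEG_LEN * N_FEATURES, dim), nn.ReLU())
    def forward(self, seg):
        return self.net(seg)

encoder = SegmentEncoder()
order_head = nn.Linear(2 * 32, 1)  # predicts whether the pair is in order
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(order_head.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic "patients": sequences with a slowly drifting (irreversible) component.
data = [torch.cumsum(torch.randn(40, N_FEATURES) * 0.1, dim=0) for _ in range(64)]

for step in range(200):
    segs_a, segs_b, labels = zip(*(sample_pair(seq) for seq in data))
    za = encoder(torch.stack(segs_a))
    zb = encoder(torch.stack(segs_b))
    logits = order_head(torch.cat([za, zb], dim=1)).squeeze(1)
    loss = loss_fn(logits, torch.tensor(labels))
    opt.zero_grad(); loss.backward(); opt.step()

# After pre-training, `encoder` would be reused (frozen or fine-tuned)
# for a label-scarce downstream task.

To solve the ordering task well, the encoder must retain whatever features change monotonically over time, which is the intuition the abstract gives for why the learned representation transfers to downstream tasks such as tracking disease progression.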

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-agrawal22a,
  title = {Leveraging Time Irreversibility with Order-Contrastive Pre-training},
  author = {Agrawal, Monica N. and Lang, Hunter and Offin, Michael and Gazit, Lior and Sontag, David},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages = {2330--2353},
  year = {2022},
  editor = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume = {151},
  series = {Proceedings of Machine Learning Research},
  month = {28--30 Mar},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v151/agrawal22a/agrawal22a.pdf},
  url = {https://proceedings.mlr.press/v151/agrawal22a.html},
  abstract = {Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques. To overcome the difficulties posed by a lack of labeled data, we explore an "order-contrastive" method for self-supervised pre-training on longitudinal data. We sample pairs of time segments, switch the order for half of them, and train a model to predict whether a given pair is in the correct order. Intuitively, the ordering task allows the model to attend to the least time-reversible features (for example, features that indicate progression of a chronic disease). The same features are often useful for downstream tasks of interest. To quantify this, we study a simple theoretical setting where we prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training. Empirically, in synthetic and longitudinal healthcare settings, we demonstrate the effectiveness of order-contrastive pre-training in the small-data regime over supervised learning and other self-supervised pre-training baselines. Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.}
}
Endnote
%0 Conference Paper
%T Leveraging Time Irreversibility with Order-Contrastive Pre-training
%A Monica N. Agrawal
%A Hunter Lang
%A Michael Offin
%A Lior Gazit
%A David Sontag
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-agrawal22a
%I PMLR
%P 2330--2353
%U https://proceedings.mlr.press/v151/agrawal22a.html
%V 151
%X Label-scarce, high-dimensional domains such as healthcare present a challenge for modern machine learning techniques. To overcome the difficulties posed by a lack of labeled data, we explore an "order-contrastive" method for self-supervised pre-training on longitudinal data. We sample pairs of time segments, switch the order for half of them, and train a model to predict whether a given pair is in the correct order. Intuitively, the ordering task allows the model to attend to the least time-reversible features (for example, features that indicate progression of a chronic disease). The same features are often useful for downstream tasks of interest. To quantify this, we study a simple theoretical setting where we prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training. Empirically, in synthetic and longitudinal healthcare settings, we demonstrate the effectiveness of order-contrastive pre-training in the small-data regime over supervised learning and other self-supervised pre-training baselines. Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.
APA
Agrawal, M.N., Lang, H., Offin, M., Gazit, L. & Sontag, D. (2022). Leveraging Time Irreversibility with Order-Contrastive Pre-training. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2330-2353. Available from https://proceedings.mlr.press/v151/agrawal22a.html.