Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data

Tamara Fernandez, Nicolas Rivera, Wenkai Xu, Arthur Gretton
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3112-3122, 2020.

Abstract

Survival Analysis and Reliability Theory are concerned with the analysis of time-to-event data, in which observations correspond to waiting times until an event of interest, such as death from a particular disease or failure of a component in a mechanical system. This type of data is unique due to the presence of censoring, a form of missing data that occurs when we do not observe the actual time of the event of interest but instead have access to an approximation of it, given by a random interval to which the observation is known to belong. Most traditional methods are not designed to deal with censoring, and thus we need to adapt them to censored time-to-event data. In this paper, we focus on non-parametric goodness-of-fit testing procedures based on combining Stein's method and kernelized discrepancies. While for uncensored data there is a natural way of implementing a kernelized Stein discrepancy test, for censored data there are several options, each with its own advantages and disadvantages. We propose a collection of kernelized Stein discrepancy tests for time-to-event data and study each of them theoretically and empirically; our experimental results show that the proposed methods perform better than existing tests, including previous tests based on a kernelized maximum mean discrepancy.
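
For readers less familiar with the setting, the following standard background may help fix notation; it is a sketch of the usual definitions, not the paper's own censored-data construction, and the symbols below are conventional rather than necessarily those used by the authors. Under right censoring, a common special case of the censoring described above, one observes pairs $(X_i, \Delta_i)$ rather than the event times $T_i$ themselves,
\[
X_i = \min(T_i, C_i), \qquad \Delta_i = \mathbf{1}\{T_i \le C_i\},
\]
where $C_i$ is a censoring time. In the uncensored case, the kernelized Stein discrepancy between the data distribution $q$ and a model density $p$ with score $s_p(x) = \tfrac{d}{dx}\log p(x)$ is typically built from a reproducing kernel $k$ via the Stein kernel
\[
h_p(x, x') = s_p(x)\,s_p(x')\,k(x, x') + s_p(x)\,\partial_{x'} k(x, x') + s_p(x')\,\partial_{x} k(x, x') + \partial_x \partial_{x'} k(x, x'),
\]
\[
\mathrm{KSD}^2(q \,\|\, p) = \mathbb{E}_{X, X' \sim q}\big[ h_p(X, X') \big].
\]
The paper's contribution concerns how this construction is adapted when only $(X_i, \Delta_i)$ are observed; the censored-data Stein operators are given in the paper itself, not reproduced here.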

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-fernandez20a,
  title     = {Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data},
  author    = {Fernandez, Tamara and Rivera, Nicolas and Xu, Wenkai and Gretton, Arthur},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3112--3122},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/fernandez20a/fernandez20a.pdf},
  url       = {https://proceedings.mlr.press/v119/fernandez20a.html}
}
Endnote
%0 Conference Paper
%T Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data
%A Tamara Fernandez
%A Nicolas Rivera
%A Wenkai Xu
%A Arthur Gretton
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-fernandez20a
%I PMLR
%P 3112--3122
%U https://proceedings.mlr.press/v119/fernandez20a.html
%V 119
APA
Fernandez, T., Rivera, N., Xu, W. & Gretton, A. (2020). Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3112-3122. Available from https://proceedings.mlr.press/v119/fernandez20a.html.