An Anytime Algorithm for Causal Inference

Peter Spirtes
Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, PMLR R3:278-285, 2001.

Abstract

The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one even if there is a possibility of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and its accuracy on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers "Can’t tell" for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.
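
To make the anytime property concrete, below is a minimal, hypothetical Python sketch of a depth-limited adjacency search in the spirit of FCI's first phase. It is not the paper's algorithm: the names anytime_adjacency_search, indep_test, and max_depth are illustrative assumptions, and the sketch omits FCI's Possible-D-Sep step and all orientation rules. It only illustrates the point of the abstract: stopping the search at any conditioning-set size leaves a supergraph of the true adjacencies, so the output remains sound, merely less informative.

# Hypothetical sketch of an interruptible (anytime) adjacency search in the
# spirit of FCI's first phase. Function and parameter names are illustrative
# assumptions, not the paper's notation.
from itertools import combinations

def anytime_adjacency_search(variables, indep_test, max_depth=None):
    """Remove edges whose endpoints are independent conditional on some subset
    of current neighbours, testing conditioning sets of increasing size.
    Stopping at any max_depth ("interrupting" the search) leaves a supergraph
    of the true adjacencies, so later steps remain sound but less informative."""
    adjacent = {v: set(variables) - {v} for v in variables}   # complete graph
    sepset = {}                                               # separating sets found
    depth = 0
    while max_depth is None or depth <= max_depth:
        any_tested = False
        for x in variables:
            for y in list(adjacent[x]):
                candidates = adjacent[x] - {y}
                if len(candidates) < depth:
                    continue                                  # no set of this size
                any_tested = True
                for s in combinations(sorted(candidates), depth):
                    if indep_test(x, y, set(s)):              # is X independent of Y given S?
                        adjacent[x].discard(y)
                        adjacent[y].discard(x)
                        sepset[frozenset((x, y))] = set(s)
                        break
        if not any_tested:        # no neighbourhood large enough: nothing left to test
            break
        depth += 1
    return adjacent, sepset

Interrupting the algorithm corresponds to calling this sketch with a small max_depth, which also avoids the low-power tests that condition on many variables; max_depth=None reproduces the full, worst-case exponential search.
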

Cite this Paper


BibTeX
@InProceedings{pmlr-vR3-spirtes01a,
  title     = {An Anytime Algorithm for Causal Inference},
  author    = {Spirtes, Peter},
  booktitle = {Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics},
  pages     = {278--285},
  year      = {2001},
  editor    = {Richardson, Thomas S. and Jaakkola, Tommi S.},
  volume    = {R3},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--07 Jan},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/r3/spirtes01a/spirtes01a.pdf},
  url       = {https://proceedings.mlr.press/r3/spirtes01a.html},
  abstract  = {The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one even if there is a possibility of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and the accuracy of the algorithm on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers "Can’t tell" for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.},
  note      = {Reissued by PMLR on 31 March 2021.}
}
Endnote
%0 Conference Paper
%T An Anytime Algorithm for Causal Inference
%A Peter Spirtes
%B Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2001
%E Thomas S. Richardson
%E Tommi S. Jaakkola
%F pmlr-vR3-spirtes01a
%I PMLR
%P 278--285
%U https://proceedings.mlr.press/r3/spirtes01a.html
%V R3
%X The Fast Causal Inference (FCI) algorithm searches for features common to observationally equivalent sets of causal directed acyclic graphs. It is correct in the large sample limit with probability one even if there is a possibility of hidden variables and selection bias. In the worst case, the number of conditional independence tests performed by the algorithm grows exponentially with the number of variables in the data set. This affects both the speed of the algorithm and the accuracy of the algorithm on small samples, because tests of independence conditional on large numbers of variables have very low power. In this paper, I prove that the FCI algorithm can be interrupted at any stage and asked for output. The output from the interrupted algorithm is still correct with probability one in the large sample limit, although possibly less informative (in the sense that it answers "Can’t tell" for a larger number of questions) than if the FCI algorithm had been allowed to continue uninterrupted.
%Z Reissued by PMLR on 31 March 2021.
APA
Spirtes, P. (2001). An Anytime Algorithm for Causal Inference. Proceedings of the Eighth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R3:278-285. Available from https://proceedings.mlr.press/r3/spirtes01a.html. Reissued by PMLR on 31 March 2021.