Learning Latent Causal Structures with a Redundant Input Neural Network

Jonathan D. Young, Bryan Andrews, Gregory F. Cooper, Xinghua Lu
Proceedings of the 2020 KDD Workshop on Causal Discovery, PMLR 127:62-91, 2020.

Abstract

Most causal discovery algorithms find causal structure among a set of observed variables. Learning the causal structure among latent variables remains an important open problem, particularly when using high-dimensional data. In this paper, we address a problem for which it is known that inputs cause outputs, and these causal relationships are encoded by a causal network among an unknown number of latent variables. We developed a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function to find causal relationships between input, hidden, and output variables. More specifically, our model allows input variables to interact directly with all latent variables in a neural network, influencing what information the latent variables should encode in order to generate the output variables accurately. In this setting, the direct connections between input and latent variables make the latent variables partially interpretable; furthermore, the connectivity among the latent variables in the neural network serves to model their potential causal relationships to each other and to the output variables. A series of simulation experiments provides evidence that the RINN method can successfully recover latent causal structure between input and output variables.
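
To make the redundant-input idea concrete, below is a minimal PyTorch sketch: the raw input vector is concatenated onto the input of every hidden layer (not just the first), and an L1 penalty encourages sparse weights whose surviving connections can be read as candidate causal edges. The layer widths, ReLU activation, and penalty coefficient here are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class RINN(nn.Module):
    """Sketch of a redundant input neural network: each hidden layer
    receives the previous layer's output concatenated with the raw
    input, so every latent unit has direct, partially interpretable
    connections to the input variables."""

    def __init__(self, n_in, n_out, hidden=(32, 32, 32)):
        super().__init__()
        sizes = [n_in] + list(hidden)
        # Layers after the first take (previous hidden + raw input).
        self.hidden = nn.ModuleList([
            nn.Linear(sizes[i] + (n_in if i > 0 else 0), sizes[i + 1])
            for i in range(len(hidden))
        ])
        self.out = nn.Linear(hidden[-1], n_out)

    def forward(self, x):
        h = x
        for i, layer in enumerate(self.hidden):
            inp = h if i == 0 else torch.cat([h, x], dim=1)  # redundant input
            h = torch.relu(layer(inp))
        return self.out(h)

def l1_penalty(model, lam=1e-3):
    # Sparsity-inducing regularizer; surviving nonzero weights are
    # interpreted as potential causal edges (lam is an assumed value).
    return lam * sum(p.abs().sum() for p in model.parameters())

# Usage sketch with random stand-in data:
model = RINN(n_in=40, n_out=10)
x, y = torch.randn(128, 40), torch.randn(128, 10)
loss = nn.functional.mse_loss(model(x), y) + l1_penalty(model)
loss.backward()
```

In this sketch the learned weight matrices play a dual role: the input-to-latent blocks hint at which inputs each latent variable encodes, while the latent-to-latent blocks model the potential causal structure among the latent variables themselves.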

Cite this Paper


BibTeX
@InProceedings{pmlr-v127-young20a,
  title     = {Learning Latent Causal Structures with a Redundant Input Neural Network},
  author    = {Young, Jonathan D. and Andrews, Bryan and Cooper, Gregory F. and Lu, Xinghua},
  booktitle = {Proceedings of the 2020 KDD Workshop on Causal Discovery},
  pages     = {62--91},
  year      = {2020},
  volume    = {127},
  series    = {Proceedings of Machine Learning Research},
  month     = {24 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v127/young20a/young20a.pdf},
  url       = {https://proceedings.mlr.press/v127/young20a.html}
}
Endnote
%0 Conference Paper
%T Learning Latent Causal Structures with a Redundant Input Neural Network
%A Jonathan D. Young
%A Bryan Andrews
%A Gregory F. Cooper
%A Xinghua Lu
%B Proceedings of the 2020 KDD Workshop on Causal Discovery
%C Proceedings of Machine Learning Research
%D 2020
%F pmlr-v127-young20a
%I PMLR
%P 62--91
%U https://proceedings.mlr.press/v127/young20a.html
%V 127
APA
Young, J.D., Andrews, B., Cooper, G.F. & Lu, X. (2020). Learning Latent Causal Structures with a Redundant Input Neural Network. Proceedings of the 2020 KDD Workshop on Causal Discovery, in Proceedings of Machine Learning Research 127:62-91. Available from https://proceedings.mlr.press/v127/young20a.html.
