Causal Inference under Interference and Model Uncertainty

Chi Zhang, Karthika Mohan, Judea Pearl
Proceedings of the Second Conference on Causal Learning and Reasoning, PMLR 213:371-385, 2023.

Abstract

Algorithms that take data as input commonly assume that variables in the input dataset are Independent and Identically Distributed (IID). However, IID may be violated in many real-world datasets that are generated by processes in which units/samples interact with one another. Typical examples include contagion, such as the spread of infectious diseases in public health, economic crises in finance, and risky behavior in social science. Handling non-IID data (without making additional assumptions) requires access to the true data-generating process and the exact interaction patterns among units/samples, which may not be easily available. This work focuses on a specific type of interaction among samples, namely interference (i.e., some units' treatments affect other units' outcomes), in situations where there is uncertainty regarding the interaction patterns. The main contributions include modeling uncertain interaction using linear graphical causal models, quantifying the bias incurred when IID is incorrectly assumed, presenting a procedure to remove such bias, and deriving bounds for average causal effects.
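To make the abstract's central claim concrete, the following is a minimal illustrative sketch (not taken from the paper, and not the authors' procedure): a linear structural causal model over pairs of interacting units, where each unit's outcome depends on its own treatment (direct effect ALPHA) and on its partner's treatment (spillover effect BETA). The parameter names, the two-unit pairing, and the within-pair treatment correlation RHO are all assumptions made for this example. Because treatments within a pair are correlated, an analyst who wrongly assumes IID samples and regresses each outcome on the unit's own treatment alone absorbs part of the spillover effect and obtains a biased estimate.

import numpy as np

rng = np.random.default_rng(0)

ALPHA = 2.0     # direct effect of a unit's own treatment on its outcome
BETA = 1.5      # spillover: effect of the partner's treatment on this unit's outcome
RHO = 0.6       # within-pair correlation of treatments (violates IID)
N_PAIRS = 50_000

# Correlated continuous treatments for the two units in each pair.
shared = rng.normal(size=N_PAIRS)
t1 = np.sqrt(RHO) * shared + np.sqrt(1 - RHO) * rng.normal(size=N_PAIRS)
t2 = np.sqrt(RHO) * shared + np.sqrt(1 - RHO) * rng.normal(size=N_PAIRS)

# Linear outcome model with interference between the paired units.
y1 = ALPHA * t1 + BETA * t2 + rng.normal(size=N_PAIRS)
y2 = ALPHA * t2 + BETA * t1 + rng.normal(size=N_PAIRS)

# Naive "IID" estimate: pool all units and regress outcome on own treatment only.
t = np.concatenate([t1, t2])
y = np.concatenate([y1, y2])
naive = np.cov(t, y)[0, 1] / np.var(t)

# Interference-aware estimate: regress on own and partner treatment jointly.
X = np.column_stack([t, np.concatenate([t2, t1]), np.ones(len(y))])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"true direct effect:      {ALPHA}")
print(f"naive IID estimate:      {naive:.3f}  (biased by about RHO*BETA = {RHO * BETA:.2f})")
print(f"adjusting for spillover: {coef[0]:.3f}")

In this toy setting the naive regression converges to ALPHA + RHO*BETA rather than ALPHA, which is one simple instance of the kind of bias the paper quantifies; the paper's actual analysis covers uncertain interaction patterns rather than a known pairing.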

Cite this Paper


BibTeX
@InProceedings{pmlr-v213-zhang23a,
  title     = {Causal Inference under Interference and Model Uncertainty},
  author    = {Zhang, Chi and Mohan, Karthika and Pearl, Judea},
  booktitle = {Proceedings of the Second Conference on Causal Learning and Reasoning},
  pages     = {371--385},
  year      = {2023},
  editor    = {van der Schaar, Mihaela and Zhang, Cheng and Janzing, Dominik},
  volume    = {213},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v213/zhang23a/zhang23a.pdf},
  url       = {https://proceedings.mlr.press/v213/zhang23a.html}
}
Endnote
%0 Conference Paper
%T Causal Inference under Interference and Model Uncertainty
%A Chi Zhang
%A Karthika Mohan
%A Judea Pearl
%B Proceedings of the Second Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2023
%E Mihaela van der Schaar
%E Cheng Zhang
%E Dominik Janzing
%F pmlr-v213-zhang23a
%I PMLR
%P 371--385
%U https://proceedings.mlr.press/v213/zhang23a.html
%V 213
APA
Zhang, C., Mohan, K. & Pearl, J. (2023). Causal Inference under Interference and Model Uncertainty. Proceedings of the Second Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 213:371-385. Available from https://proceedings.mlr.press/v213/zhang23a.html.