Counterfactual Reasoning for Fair Clinical Risk Prediction

Stephen R. Pfohl, Tony Duan, Daisy Yi Ding, Nigam H. Shah
Proceedings of the 4th Machine Learning for Healthcare Conference, PMLR 106:325-358, 2019.

Abstract

The use of machine learning systems to support decision making in healthcare raises questions as to what extent these systems may introduce or exacerbate disparities in care for historically underrepresented and mistreated groups, due to biases implicitly embedded in observational data in electronic health records. To address this problem in the context of clinical risk prediction models, we develop an augmented counterfactual fairness criterion that extends the group fairness criterion of equalized odds. We do so by requiring that the same prediction be made for a patient and for the counterfactual patient that results from changing a sensitive attribute, whenever the factual and counterfactual outcomes do not differ. We investigate the extent to which the augmented counterfactual fairness criterion may be applied to develop fair models for prolonged inpatient length of stay and mortality with observational electronic health records data. As the fairness criterion is ill-defined without knowledge of the data generating process, we use a variational autoencoder to perform counterfactual inference in the context of an assumed causal graph. While our technique provides a means to trade off maintenance of fairness with reduction in predictive performance in the context of a learned generative model, further work is needed to assess the generality of this approach.
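The criterion described above can be illustrated as a training penalty. The following is a minimal sketch, not the authors' implementation: it assumes factual and counterfactual logits are already available (e.g., from a generative model's counterfactual inference), and adds to the usual binary cross-entropy a squared-difference penalty between the two predictions, masked so that it only applies where the factual and counterfactual outcomes agree. The function name, the MSE form of the penalty, and the weight `lam` are all illustrative choices, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_penalized_loss(logits_f, logits_cf, y_f, y_cf, lam=1.0):
    """Binary cross-entropy on factual predictions, plus a counterfactual
    fairness penalty applied only where factual and counterfactual
    outcomes agree (y_f == y_cf). All arguments are 1-D numpy arrays;
    `lam` trades off predictive performance against fairness."""
    eps = 1e-12
    p_f = sigmoid(logits_f)
    bce = -np.mean(y_f * np.log(p_f + eps) + (1.0 - y_f) * np.log(1.0 - p_f + eps))
    # Mask selects patients whose outcome is unchanged under the counterfactual
    mask = (y_f == y_cf).astype(float)
    # Penalize divergence between factual and counterfactual predictions
    penalty = np.sum(mask * (logits_f - logits_cf) ** 2) / max(mask.sum(), 1.0)
    return bce + lam * penalty
```

With `lam = 0` this reduces to an ordinary risk model; increasing `lam` pulls the factual and counterfactual predictions together for the masked subset, which is the trade-off between fairness maintenance and predictive performance that the abstract refers to.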

Cite this Paper


BibTeX
@InProceedings{pmlr-v106-pfohl19a,
  title     = {Counterfactual Reasoning for Fair Clinical Risk Prediction},
  author    = {Pfohl, Stephen R. and Duan, Tony and Ding, Daisy Yi and Shah, Nigam H.},
  pages     = {325--358},
  year      = {2019},
  editor    = {Finale Doshi-Velez and Jim Fackler and Ken Jung and David Kale and Rajesh Ranganath and Byron Wallace and Jenna Wiens},
  volume    = {106},
  series    = {Proceedings of Machine Learning Research},
  address   = {Ann Arbor, Michigan},
  month     = {09--10 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v106/pfohl19a/pfohl19a.pdf},
  url       = {http://proceedings.mlr.press/v106/pfohl19a.html},
  abstract  = {The use of machine learning systems to support decision making in healthcare raises questions as to what extent these systems may introduce or exacerbate disparities in care for historically underrepresented and mistreated groups, due to biases implicitly embedded in observational data in electronic health records. To address this problem in the context of clinical risk prediction models, we develop an augmented counterfactual fairness criteria that extends the group fairness criteria of equalized odds. We do so by requiring that the same prediction be made for a patient, and a counterfactual patient resulting from changing a sensitive attribute, if the factual and counterfactual outcomes do not differ. We investigate the extent to which the augmented counterfactual fairness criteria may be applied to develop fair models for prolonged inpatient length of stay and mortality with observational electronic health records data. As the fairness criteria is ill-defined without knowledge of the data generating process, we use a variational autoencoder to perform counterfactual inference in the context of an assumed causal graph. While our technique provides a means to trade off maintenance of fairness with reduction in predictive performance in the context of a learned generative model, further work is needed to assess the generality of this approach.}
}
Endnote
%0 Conference Paper
%T Counterfactual Reasoning for Fair Clinical Risk Prediction
%A Stephen R. Pfohl
%A Tony Duan
%A Daisy Yi Ding
%A Nigam H. Shah
%B Proceedings of the 4th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2019
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v106-pfohl19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 325--358
%U http://proceedings.mlr.press
%V 106
%W PMLR
%X The use of machine learning systems to support decision making in healthcare raises questions as to what extent these systems may introduce or exacerbate disparities in care for historically underrepresented and mistreated groups, due to biases implicitly embedded in observational data in electronic health records. To address this problem in the context of clinical risk prediction models, we develop an augmented counterfactual fairness criteria that extends the group fairness criteria of equalized odds. We do so by requiring that the same prediction be made for a patient, and a counterfactual patient resulting from changing a sensitive attribute, if the factual and counterfactual outcomes do not differ. We investigate the extent to which the augmented counterfactual fairness criteria may be applied to develop fair models for prolonged inpatient length of stay and mortality with observational electronic health records data. As the fairness criteria is ill-defined without knowledge of the data generating process, we use a variational autoencoder to perform counterfactual inference in the context of an assumed causal graph. While our technique provides a means to trade off maintenance of fairness with reduction in predictive performance in the context of a learned generative model, further work is needed to assess the generality of this approach.
APA
Pfohl, S.R., Duan, T., Ding, D.Y. & Shah, N.H. (2019). Counterfactual Reasoning for Fair Clinical Risk Prediction. Proceedings of the 4th Machine Learning for Healthcare Conference, in PMLR 106:325-358.