Hidden Risks of Machine Learning Applied to Healthcare: Unintended Feedback Loops Between Models and Future Data Causing Model Degradation

George Alexandru Adam, Chun-Hao Kingsley Chang, Benjamin Haibe-Kains, Anna Goldenberg
Proceedings of the 5th Machine Learning for Healthcare Conference, PMLR 126:710-731, 2020.

Abstract

There is much hope for the positive impact of machine learning on healthcare. In fact, several ML methods are already used in everyday clinical practice, but the effect of adopting imperfect predictions from an ML system on model performance over time is unknown. Clinicians changing their decisions based on an imperfect ML system changes the underlying probability distribution P(Y) of future data, where Y is the outcome. This effect has not been carefully studied to date. In this work we tackle the problem of model predictions influencing future labels (which we refer to as the feedback loop) by considering several supervised learning scenarios, and show that unlike in the no-feedback-loop setting, if clinicians fully trust the model (100% adoption of the predicted label) the false positive rate (FPR) grows uncontrollably with the number of updates. We simulate the feedback loop problem on real-world ICU data (MIMIC-IV v0.1) as the distribution shifts over time. Among our scenarios, we consider how the clinician’s trust in the model over time impacts the magnitude of the FPR increase due to a feedback loop. Finally, we propose mitigating solutions to the observed model degradation using heuristics that discard potentially incorrectly labeled samples. We hope that our work draws attention to the existence of the feedback-loop problem, resulting in both theoretical and practical advances for ML in healthcare.
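The feedback loop the abstract describes can be illustrated with a minimal, hypothetical simulation (this is not the authors' code; the generative process, model choice, sample sizes, and `trust` parameter are all illustrative assumptions): at each update step, the recorded label is replaced by the model's prediction with probability `trust`, the corrupted data is added to the training set, and the model is retrained, while FPR is measured on a held-out set with uncorrupted labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n):
    # Hypothetical 2-feature generative process for a binary outcome Y.
    X = rng.normal(size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
    y = (rng.random(n) < p).astype(int)
    return X, y

def fpr(model, X, y):
    # False positive rate: fraction of true negatives predicted positive.
    pred = model.predict(X)
    neg = y == 0
    return float((pred[neg] == 1).mean())

trust = 1.0                    # 100% adoption of the predicted label
X_tr, y_tr = sample(500)       # initial, uncorrupted training data
X_te, y_te = sample(5000)      # held-out set with clean labels

model = LogisticRegression().fit(X_tr, y_tr)
for step in range(10):
    X_new, y_new = sample(500)
    y_obs = y_new.copy()
    adopt = rng.random(len(y_new)) < trust
    # Clinician records the model's label instead of the true outcome.
    y_obs[adopt] = model.predict(X_new)[adopt]
    X_tr = np.vstack([X_tr, X_new])
    y_tr = np.concatenate([y_tr, y_obs])
    model = LogisticRegression().fit(X_tr, y_tr)  # periodic model update
    print(step, round(fpr(model, X_te, y_te), 3))
```

In this toy setup the model's own errors are fed back as training labels, so mistakes near the decision boundary become self-confirming; the magnitude and direction of the drift depend on the data-generating process and the trust level, which is exactly the kind of dynamic the paper studies.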

Cite this Paper


BibTeX
@InProceedings{pmlr-v126-adam20a,
  title     = {Hidden Risks of Machine Learning Applied to Healthcare: Unintended Feedback Loops Between Models and Future Data Causing Model Degradation},
  author    = {Adam, George Alexandru and Chang, Chun-Hao Kingsley and Haibe-Kains, Benjamin and Goldenberg, Anna},
  booktitle = {Proceedings of the 5th Machine Learning for Healthcare Conference},
  pages     = {710--731},
  year      = {2020},
  editor    = {Finale Doshi-Velez and Jim Fackler and Ken Jung and David Kale and Rajesh Ranganath and Byron Wallace and Jenna Wiens},
  volume    = {126},
  series    = {Proceedings of Machine Learning Research},
  address   = {Virtual},
  month     = {07--08 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v126/adam20a/adam20a.pdf},
  url       = {http://proceedings.mlr.press/v126/adam20a.html},
  abstract  = {There is much hope for the positive impact of machine learning on healthcare. In fact, several ML methods are already used in everyday clinical practice, but the effect of adopting imperfect predictions from an ML system on model performance over time is unknown. Clinicians changing their decisions based on an imperfect ML system changes the underlying probability distribution P(Y) of future data, where Y is the outcome. This effect has not been carefully studied to date. In this work we tackle the problem of model predictions influencing future labels (which we refer to as the feedback loop) by considering several supervised learning scenarios, and show that unlike in the no-feedback-loop setting, if clinicians fully trust the model (100% adoption of the predicted label) the false positive rate (FPR) grows uncontrollably with the number of updates. We simulate the feedback loop problem on real-world ICU data (MIMIC-IV v0.1) as the distribution shifts over time. Among our scenarios, we consider how the clinician's trust in the model over time impacts the magnitude of the FPR increase due to a feedback loop. Finally, we propose mitigating solutions to the observed model degradation using heuristics that discard potentially incorrectly labeled samples. We hope that our work draws attention to the existence of the feedback-loop problem, resulting in both theoretical and practical advances for ML in healthcare.}
}
Endnote
%0 Conference Paper
%T Hidden Risks of Machine Learning Applied to Healthcare: Unintended Feedback Loops Between Models and Future Data Causing Model Degradation
%A George Alexandru Adam
%A Chun-Hao Kingsley Chang
%A Benjamin Haibe-Kains
%A Anna Goldenberg
%B Proceedings of the 5th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v126-adam20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 710--731
%U http://proceedings.mlr.press
%V 126
%W PMLR
%X There is much hope for the positive impact of machine learning on healthcare. In fact, several ML methods are already used in everyday clinical practice, but the effect of adopting imperfect predictions from an ML system on model performance over time is unknown. Clinicians changing their decisions based on an imperfect ML system changes the underlying probability distribution P(Y) of future data, where Y is the outcome. This effect has not been carefully studied to date. In this work we tackle the problem of model predictions influencing future labels (which we refer to as the feedback loop) by considering several supervised learning scenarios, and show that unlike in the no-feedback-loop setting, if clinicians fully trust the model (100% adoption of the predicted label) the false positive rate (FPR) grows uncontrollably with the number of updates. We simulate the feedback loop problem on real-world ICU data (MIMIC-IV v0.1) as the distribution shifts over time. Among our scenarios, we consider how the clinician's trust in the model over time impacts the magnitude of the FPR increase due to a feedback loop. Finally, we propose mitigating solutions to the observed model degradation using heuristics that discard potentially incorrectly labeled samples. We hope that our work draws attention to the existence of the feedback-loop problem, resulting in both theoretical and practical advances for ML in healthcare.
APA
Adam, G.A., Chang, C.K., Haibe-Kains, B. & Goldenberg, A. (2020). Hidden Risks of Machine Learning Applied to Healthcare: Unintended Feedback Loops Between Models and Future Data Causing Model Degradation. Proceedings of the 5th Machine Learning for Healthcare Conference, in PMLR 126:710-731.