Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment

Chelsea Barabas, Madars Virza, Karthik Dinakar, Joichi Ito, Jonathan Zittrain
Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:62-76, 2018.

Abstract

Actuarial risk assessments are frequently touted as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, which seek to problematize the use of variables that act as “proxies” for protected classes, such as race and gender. However, these debates fail to address the core ethical issue at hand: that current risk assessments are ill-equipped to support ethical punishment and rehabilitation practices in the criminal justice system, because they offer only a limited insight into the underlying drivers of criminal behavior. In this paper, we examine the prevailing paradigms of fairness currently under debate and propose an alternative methodology for identifying the underlying social and structural factors that drive criminal behavior. We argue that the core ethical debate surrounding the use of regression in risk assessments is not one of bias or accuracy. Rather, it’s one of purpose. If machine learning is operationalized merely in the service of predicting future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction; rather, it should be used to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.

Cite this Paper


BibTeX
@InProceedings{pmlr-v81-barabas18a,
  title     = {Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment},
  author    = {Barabas, Chelsea and Virza, Madars and Dinakar, Karthik and Ito, Joichi and Zittrain, Jonathan},
  booktitle = {Proceedings of the 1st Conference on Fairness, Accountability and Transparency},
  pages     = {62--76},
  year      = {2018},
  editor    = {Friedler, Sorelle A. and Wilson, Christo},
  volume    = {81},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v81/barabas18a/barabas18a.pdf},
  url       = {https://proceedings.mlr.press/v81/barabas18a.html},
  abstract  = {Actuarial risk assessments are frequently touted as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, which seek to problematize the use of variables that act as “proxies” for protected classes, such as race and gender. However, these debates fail to address the core ethical issue at hand - that current risk assessments are ill-equipped to support ethical punishment and rehabilitation practices in the criminal justice system, because they offer only a limited insight into the underlying drivers of criminal behavior. In this paper, we examine the prevailing paradigms of fairness currently under debate and propose an alternative methodology for identifying the underlying social and structural factors that drive criminal behavior. We argue that the core ethical debate surrounding the use of regression in risk assessments is not one of bias or accuracy. Rather, it’s one of purpose. If machine learning is operationalized merely in the service of predicting future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, rather it should be used to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.}
}
Endnote
%0 Conference Paper
%T Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment
%A Chelsea Barabas
%A Madars Virza
%A Karthik Dinakar
%A Joichi Ito
%A Jonathan Zittrain
%B Proceedings of the 1st Conference on Fairness, Accountability and Transparency
%C Proceedings of Machine Learning Research
%D 2018
%E Sorelle A. Friedler
%E Christo Wilson
%F pmlr-v81-barabas18a
%I PMLR
%P 62--76
%U https://proceedings.mlr.press/v81/barabas18a.html
%V 81
%X Actuarial risk assessments are frequently touted as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, which seek to problematize the use of variables that act as “proxies” for protected classes, such as race and gender. However, these debates fail to address the core ethical issue at hand - that current risk assessments are ill-equipped to support ethical punishment and rehabilitation practices in the criminal justice system, because they offer only a limited insight into the underlying drivers of criminal behavior. In this paper, we examine the prevailing paradigms of fairness currently under debate and propose an alternative methodology for identifying the underlying social and structural factors that drive criminal behavior. We argue that the core ethical debate surrounding the use of regression in risk assessments is not one of bias or accuracy. Rather, it’s one of purpose. If machine learning is operationalized merely in the service of predicting future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, rather it should be used to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.
APA
Barabas, C., Virza, M., Dinakar, K., Ito, J., & Zittrain, J. (2018). Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research 81:62-76. Available from https://proceedings.mlr.press/v81/barabas18a.html.
