Can AI Help Reduce Human Bias? Insights from Police Rearrest Predictions

Yong Suk Lee
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:499-504, 2025.

Abstract

This short paper introduces the findings of Lee (2025), which examines the racial implications of police interaction with predictive algorithms, particularly in the context of racial disparities in rearrest predictions in the United States. In the experiment, police officers were shown the profiles of young offenders and asked to predict each offender’s probability of rearrest within three years, both before and after seeing the algorithm’s prediction. The experiment varied whether the offender’s race was visible to the officers and whether the officers were informed of the model’s accuracy. Lee (2025) finds that when the offender’s race is disclosed, officers tend to adjust their predictions toward the algorithm’s assessment. However, these adjustments showed significant racial disparities: there was a noticeable gap in initial rearrest predictions between Black and White offenders, even after controlling for offender characteristics. Officers tended to predict higher rearrest rates for Black offenders only when race was visible, but reduced their predictions after seeing the algorithm’s assessment. Not all officers made this adjustment, though: only Black police officers made significant downward revisions following the algorithm’s prediction, while White police officers did not significantly alter their assessments.
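The abstract’s notion of officers "adjusting their predictions toward the algorithm’s assessment" can be made concrete with a standard weight-on-advice measure from the judgment-and-advice literature. The sketch below is purely illustrative: the data values, the column names (pre_prediction, post_prediction, algo_prediction, race_visible, black_offender), and the group comparison are assumptions made for exposition, not the variables or the specification used in Lee (2025).

import numpy as np
import pandas as pd

# Hypothetical data: one row per officer-profile evaluation (values are made up).
df = pd.DataFrame({
    # officer's rearrest prediction before seeing the algorithm
    "pre_prediction":  [0.55, 0.40, 0.70, 0.30, 0.65, 0.35, 0.60, 0.45],
    # officer's prediction after seeing the algorithm
    "post_prediction": [0.48, 0.41, 0.58, 0.32, 0.55, 0.36, 0.52, 0.44],
    # the algorithm's predicted three-year rearrest probability
    "algo_prediction": [0.42, 0.45, 0.50, 0.35, 0.47, 0.40, 0.44, 0.42],
    "race_visible":    [1, 1, 1, 1, 0, 0, 0, 0],  # 1 if the offender's race was shown
    "black_offender":  [1, 0, 1, 0, 1, 0, 1, 0],  # offender race indicator
})

# "Weight on advice": 0 = officer ignores the algorithm, 1 = officer fully adopts it.
gap = df["algo_prediction"] - df["pre_prediction"]
df["weight_on_advice"] = (df["post_prediction"] - df["pre_prediction"]) / gap.replace(0.0, np.nan)

# Compare average adjustment by race visibility and offender race,
# loosely mirroring the comparisons described in the abstract.
print(df.groupby(["race_visible", "black_offender"])["weight_on_advice"].mean())

A fuller analysis in the spirit of the paper would also condition on offender characteristics and on officer attributes (e.g., officer race), but the basic quantity of interest is the same: how far the second prediction moves from the first toward the algorithm’s assessment.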

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-lee25a,
  title     = {Can AI Help Reduce Human Bias? Insights from Police Rearrest Predictions},
  author    = {Lee, Yong Suk},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {499--504},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and Corrêa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/lee25a/lee25a.pdf},
  url       = {https://proceedings.mlr.press/v294/lee25a.html},
  abstract  = {This short paper introduces the findings of Lee (2025) that examines the racial implications of police interaction with predictive algorithms, particularly in the context of racial disparities in rearrest predictions in the United States. He conducted an experiment where police officers were shown the profiles of young offenders and were asked to predict each offender’s rearrest probability within three years, both before and after being shown the algorithm’s prediction. The experiment varied the visibility of the offender’s race to the officers and also experimented with informing the officers of the model’s accuracy. Lee (2025) finds that when the race of the offender is disclosed, officers tend to adjust their predictions towards the algorithm’s assessment. However, the adjustments made by the officers showed significant racial disparities: there was a noticeable gap in initial rearrest predictions between Black and White offenders, even when controlling for the characteristics of the offenders. The police tended to predict higher rearrest rates for Black offenders only when race was visible, but reduced their predictions after seeing the algorithm’s assessment. However, not all police officers reduced their predictions after seeing the algorithm’s predictions. Only Black police officers made significant downward adjustments following the algorithm’s prediction, while White police officers did not significantly alter their assessments.}
}
Endnote
%0 Conference Paper
%T Can AI Help Reduce Human Bias? Insights from Police Rearrest Predictions
%A Yong Suk Lee
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria Corrêa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-lee25a
%I PMLR
%P 499--504
%U https://proceedings.mlr.press/v294/lee25a.html
%V 294
%X This short paper introduces the findings of Lee (2025) that examines the racial implications of police interaction with predictive algorithms, particularly in the context of racial disparities in rearrest predictions in the United States. He conducted an experiment where police officers were shown the profiles of young offenders and were asked to predict each offender’s rearrest probability within three years, both before and after being shown the algorithm’s prediction. The experiment varied the visibility of the offender’s race to the officers and also experimented with informing the officers of the model’s accuracy. Lee (2025) finds that when the race of the offender is disclosed, officers tend to adjust their predictions towards the algorithm’s assessment. However, the adjustments made by the officers showed significant racial disparities: there was a noticeable gap in initial rearrest predictions between Black and White offenders, even when controlling for the characteristics of the offenders. The police tended to predict higher rearrest rates for Black offenders only when race was visible, but reduced their predictions after seeing the algorithm’s assessment. However, not all police officers reduced their predictions after seeing the algorithm’s predictions. Only Black police officers made significant downward adjustments following the algorithm’s prediction, while White police officers did not significantly alter their assessments.
APA
Lee, Y. S. (2025). Can AI Help Reduce Human Bias? Insights from Police Rearrest Predictions. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:499-504. Available from https://proceedings.mlr.press/v294/lee25a.html.
