Can AI Help Reduce Human Bias? Insights from Police Rearrest Predictions
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:499-504, 2025.
Abstract
This short paper introduces the findings of Lee (2025), which examines the racial implications of police interaction with predictive algorithms, particularly in the context of racial disparities in rearrest predictions in the United States. In the experiment, police officers were shown the profiles of young offenders and asked to predict each offender’s probability of rearrest within three years, both before and after being shown the algorithm’s prediction. The experiment varied whether the offender’s race was visible to the officers and whether the officers were informed of the model’s accuracy. Lee (2025) finds that when the race of the offender is disclosed, officers tend to adjust their predictions towards the algorithm’s assessment. These adjustments, however, showed significant racial disparities: there was a noticeable gap in initial rearrest predictions between Black and White offenders, even when controlling for offender characteristics. Officers tended to predict higher rearrest rates for Black offenders only when race was visible, and they lowered these predictions after seeing the algorithm’s assessment. Yet not all officers adjusted in this way: only Black police officers made significant downward adjustments following the algorithm’s prediction, while White police officers did not significantly alter their assessments.