Auditing ML Models for Individual Bias and Unfairness
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4552-4562, 2020.
Abstract
We consider the task of auditing ML models for individual bias/unfairness. We formalize the task as an optimization problem and develop a suite of inferential tools for the optimal value. Our tools permit us to obtain asymptotic confidence intervals that cover the target and hypothesis tests that control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe's COMPAS recidivism prediction instrument.
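To give a sense of the auditing workflow described above, the following Python sketch illustrates one simple (and much cruder) way to audit a trained classifier for individual bias: it builds counterfactual "twins" by flipping a protected attribute, estimates the rate at which predictions change, and attaches an asymptotic confidence interval and a one-sided hypothesis test to that rate. This is not the paper's estimator, which is defined through an optimization problem over a fair metric; the toy data, model, and tolerance below are all hypothetical.

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: a toy dataset and model stand in for the audited system.
rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)  # protected attribute (e.g., gender), hypothetical
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(int)

features = np.column_stack([X, a])
model = LogisticRegression().fit(features, y)

# Individual-bias audit: compare each individual with a counterfactual "twin"
# identical in every feature except the protected attribute.
features_cf = np.column_stack([X, 1 - a])
pred = model.predict(features)
pred_cf = model.predict(features_cf)
flips = (pred != pred_cf).astype(float)

# Audit statistic: rate of prediction flips between near-identical individuals.
rate = flips.mean()
se = np.sqrt(rate * (1 - rate) / n)

# Asymptotic 95% confidence interval for the flip rate (normal approximation).
z = stats.norm.ppf(0.975)
ci = (rate - z * se, rate + z * se)

# One-sided test of H0: flip rate <= tol (no individual bias beyond tolerance).
tol = 0.01
z_stat = (rate - tol) / max(se, 1e-12)
p_value = 1 - stats.norm.cdf(z_stat)

print(f"flip rate = {rate:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), p = {p_value:.4f}")

A small p-value in this sketch would indicate that the model's predictions change for counterfactual twins more often than the chosen tolerance allows; the paper's tools provide analogous confidence intervals and tests for the optimal value of its bias-auditing optimization problem.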