Auditing ML Models for Individual Bias and Unfairness

Songkai Xue, Mikhail Yurochkin, Yuekai Sun
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4552-4562, 2020.

Abstract

We consider the task of auditing ML models for individual bias/unfairness. We formalize the task in an optimization problem and develop a suite of inferential tools for the optimal value. Our tools permit us to obtain asymptotic confidence intervals and hypothesis tests that cover the target/control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe’s COMPAS recidivism prediction instrument.

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-xue20a,
  title     = {Auditing ML Models for Individual Bias and Unfairness},
  author    = {Xue, Songkai and Yurochkin, Mikhail and Sun, Yuekai},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4552--4562},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/xue20a/xue20a.pdf},
  url       = {https://proceedings.mlr.press/v108/xue20a.html},
  abstract  = {We consider the task of auditing ML models for individual bias/unfairness. We formalize the task in an optimization problem and develop a suite of inferential tools for the optimal value. Our tools permit us to obtain asymptotic confidence intervals and hypothesis tests that cover the target/control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe’s COMPAS recidivism prediction instrument.}
}
Endnote
%0 Conference Paper
%T Auditing ML Models for Individual Bias and Unfairness
%A Songkai Xue
%A Mikhail Yurochkin
%A Yuekai Sun
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-xue20a
%I PMLR
%P 4552--4562
%U https://proceedings.mlr.press/v108/xue20a.html
%V 108
%X We consider the task of auditing ML models for individual bias/unfairness. We formalize the task in an optimization problem and develop a suite of inferential tools for the optimal value. Our tools permit us to obtain asymptotic confidence intervals and hypothesis tests that cover the target/control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe’s COMPAS recidivism prediction instrument.
APA
Xue, S., Yurochkin, M., & Sun, Y. (2020). Auditing ML Models for Individual Bias and Unfairness. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:4552-4562. Available from https://proceedings.mlr.press/v108/xue20a.html.