Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions

Hao Wang, Berk Ustun, Flavio Calmon
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6618-6627, 2019.

Abstract

When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group. In this paper, we exploit this fact to reduce the disparate impact of a fixed classification model over a population of interest. Given a black-box classifier, we aim to eliminate the performance gap by perturbing the distribution of input variables for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for common fairness criteria. We introduce a descent algorithm to learn a counterfactual distribution from data. We then discuss how the estimated distribution can be used to build a data preprocessor that can reduce disparate impact without training a new model. We validate our approach through experiments on real-world datasets, showing that it can repair different forms of disparity without a significant drop in accuracy.
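The full method is developed in the paper; as a rough, hedged illustration of the idea summarized above, the sketch below uses a toy setup to show how a perturbed ("counterfactual") input distribution for the disadvantaged group can be learned by descent for a fixed black-box classifier and then applied as a preprocessor. Everything in it is an assumption made for illustration, not the authors' algorithm: the discrete feature space, the use of a statistical-parity gap as the disparity measure, the squared-distance penalty, and the helper names (project_simplex, positive_rate, preprocess).

    # Illustrative sketch only (toy assumptions, not the paper's exact algorithm):
    # learn a perturbed input distribution q for the disadvantaged group that
    # closes a statistical-parity gap under a fixed classifier h, then resample
    # that group's inputs so their empirical distribution follows q.
    import numpy as np

    rng = np.random.default_rng(0)

    n_profiles = 8                                      # toy discrete feature space
    h = (rng.random(n_profiles) > 0.5).astype(float)    # fixed "black-box" decision per profile
    p_adv = rng.dirichlet(np.ones(n_profiles))          # advantaged group's input distribution
    p_dis = rng.dirichlet(np.ones(n_profiles))          # disadvantaged group's input distribution

    def positive_rate(p):
        return float(h @ p)                             # P(h(X) = 1) under distribution p

    def project_simplex(v):
        # Euclidean projection onto the probability simplex (Duchi et al.-style).
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        theta = (1 - css[rho]) / (rho + 1)
        return np.maximum(v + theta, 0)

    # Projected descent: minimize (positive_rate(q) - target)^2 + lam * ||q - p_dis||^2.
    q = p_dis.copy()
    target = positive_rate(p_adv)
    lam, step = 1.0, 0.1
    for _ in range(500):
        gap = positive_rate(q) - target
        grad = 2 * gap * h + 2 * lam * (q - p_dis)
        q = project_simplex(q - step * grad)

    print("parity gap before:", abs(positive_rate(p_dis) - target))
    print("parity gap after: ", abs(positive_rate(q) - target))

    # A simple preprocessor: importance-weighted resampling of the disadvantaged
    # group's inputs so they approximately follow the counterfactual distribution q.
    def preprocess(samples, p_src=p_dis, p_tgt=q):
        w = p_tgt[samples] / np.maximum(p_src[samples], 1e-12)
        w /= w.sum()
        return rng.choice(samples, size=len(samples), p=w)

    samples = rng.choice(n_profiles, size=1000, p=p_dis)
    repaired = preprocess(samples)
    print("empirical positive rate after repair:", h[repaired].mean())

In this toy version the classifier is never retrained: only the disadvantaged group's input distribution is moved toward the counterfactual one, and the distance penalty keeps the repair close to the original data. The paper's actual construction characterizes counterfactual distributions for several fairness criteria and works with real datasets rather than this contrived discrete example.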

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-wang19l,
  title     = {Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions},
  author    = {Wang, Hao and Ustun, Berk and Calmon, Flavio},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6618--6627},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/wang19l/wang19l.pdf},
  url       = {https://proceedings.mlr.press/v97/wang19l.html},
  abstract  = {When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group. In this paper, we exploit this fact to reduce the disparate impact of a fixed classification model over a population of interest. Given a black-box classifier, we aim to eliminate the performance gap by perturbing the distribution of input variables for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for common fairness criteria. We introduce a descent algorithm to learn a counterfactual distribution from data. We then discuss how the estimated distribution can be used to build a data preprocessor that can reduce disparate impact without training a new model. We validate our approach through experiments on real-world datasets, showing that it can repair different forms of disparity without a significant drop in accuracy.}
}
Endnote
%0 Conference Paper
%T Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
%A Hao Wang
%A Berk Ustun
%A Flavio Calmon
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-wang19l
%I PMLR
%P 6618--6627
%U https://proceedings.mlr.press/v97/wang19l.html
%V 97
%X When the performance of a machine learning model varies over groups defined by sensitive attributes (e.g., gender or ethnicity), the performance disparity can be expressed in terms of the probability distributions of the input and output variables over each group. In this paper, we exploit this fact to reduce the disparate impact of a fixed classification model over a population of interest. Given a black-box classifier, we aim to eliminate the performance gap by perturbing the distribution of input variables for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for common fairness criteria. We introduce a descent algorithm to learn a counterfactual distribution from data. We then discuss how the estimated distribution can be used to build a data preprocessor that can reduce disparate impact without training a new model. We validate our approach through experiments on real-world datasets, showing that it can repair different forms of disparity without a significant drop in accuracy.
APA
Wang, H., Ustun, B. & Calmon, F. (2019). Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6618-6627. Available from https://proceedings.mlr.press/v97/wang19l.html.