Fairness-Aware Learning for Continuous Attributes and Treatments

Jeremie Mary, Clément Calauzènes, Noureddine El Karoui
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4382-4391, 2019.

Abstract

We address the problem of algorithmic fairness: ensuring that the outcome of a classifier is not biased towards certain values of sensitive variables such as age, race or gender. As common fairness metrics can be expressed as measures of (conditional) independence between variables, we propose to use the Rényi maximum correlation coefficient to generalize fairness measurement to continuous variables. We exploit Witsenhausen’s characterization of the Rényi correlation coefficient to propose a differentiable implementation linked to $f$-divergences. This allows us to generalize fairness-aware learning to continuous variables by using a penalty that upper bounds this coefficient. It also allows fairness to be extended to variables such as mixed ethnic groups or financial status without threshold effects. This penalty can be estimated on mini-batches, allowing the use of deep nets. Experiments show favorable comparisons to the state of the art on binary variables and demonstrate the ability to protect continuous ones.
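To make the idea of a dependence penalty concrete, the sketch below computes the squared Pearson correlation between a model's predictions and a continuous sensitive attribute on a mini-batch. This is only a crude illustration: Pearson correlation captures linear dependence and is a lower bound on the Rényi maximum correlation, whereas the paper's actual penalty upper bounds that coefficient via an $f$-divergence-based construction. All names here (`pearson_penalty`, the synthetic data) are illustrative, not from the paper.

```python
import math
import random

def pearson_penalty(y_hat, z):
    """Squared Pearson correlation between predictions y_hat and a
    continuous sensitive attribute z, computed on one mini-batch.

    Illustrative surrogate only: it detects linear dependence, a lower
    bound on the Renyi maximum correlation that the paper penalizes.
    """
    n = len(y_hat)
    mean_y = sum(y_hat) / n
    mean_z = sum(z) / n
    yc = [v - mean_y for v in y_hat]          # center predictions
    zc = [v - mean_z for v in z]              # center sensitive attribute
    num = sum(a * b for a, b in zip(yc, zc))  # covariance term
    den = math.sqrt(sum(a * a for a in yc) * sum(b * b for b in zc)) + 1e-12
    return (num / den) ** 2                   # in [0, 1]; 0 = no linear dependence

random.seed(0)
z = [random.gauss(0, 1) for _ in range(256)]            # continuous sensitive attribute
biased = [v + 0.1 * random.gauss(0, 1) for v in z]      # predictions leaking z
fair = [random.gauss(0, 1) for _ in range(256)]         # predictions independent of z

print(pearson_penalty(biased, z))  # close to 1: strong dependence, large penalty
print(pearson_penalty(fair, z))    # close to 0: little dependence, small penalty
```

In a fairness-aware training loop, such a batch-level penalty would be added (with a trade-off weight) to the prediction loss, which is what makes mini-batch estimability relevant for deep nets.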

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-mary19a,
  title = {Fairness-Aware Learning for Continuous Attributes and Treatments},
  author = {Mary, Jeremie and Calauz{\`e}nes, Cl{\'e}ment and Karoui, Noureddine El},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {4382--4391},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/mary19a/mary19a.pdf},
  url = {https://proceedings.mlr.press/v97/mary19a.html},
  abstract = {We address the problem of algorithmic fairness: ensuring that the outcome of a classifier is not biased towards certain values of sensitive variables such as age, race or gender. As common fairness metrics can be expressed as measures of (conditional) independence between variables, we propose to use the Rényi maximum correlation coefficient to generalize fairness measurement to continuous variables. We exploit Witsenhausen’s characterization of the Rényi correlation coefficient to propose a differentiable implementation linked to $f$-divergences. This allows us to generalize fairness-aware learning to continuous variables by using a penalty that upper bounds this coefficient. It also allows fairness to be extended to variables such as mixed ethnic groups or financial status without threshold effects. This penalty can be estimated on mini-batches, allowing the use of deep nets. Experiments show favorable comparisons to the state of the art on binary variables and demonstrate the ability to protect continuous ones.}
}
EndNote
%0 Conference Paper
%T Fairness-Aware Learning for Continuous Attributes and Treatments
%A Jeremie Mary
%A Clément Calauzènes
%A Noureddine El Karoui
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-mary19a
%I PMLR
%P 4382--4391
%U https://proceedings.mlr.press/v97/mary19a.html
%V 97
%X We address the problem of algorithmic fairness: ensuring that the outcome of a classifier is not biased towards certain values of sensitive variables such as age, race or gender. As common fairness metrics can be expressed as measures of (conditional) independence between variables, we propose to use the Rényi maximum correlation coefficient to generalize fairness measurement to continuous variables. We exploit Witsenhausen’s characterization of the Rényi correlation coefficient to propose a differentiable implementation linked to $f$-divergences. This allows us to generalize fairness-aware learning to continuous variables by using a penalty that upper bounds this coefficient. It also allows fairness to be extended to variables such as mixed ethnic groups or financial status without threshold effects. This penalty can be estimated on mini-batches, allowing the use of deep nets. Experiments show favorable comparisons to the state of the art on binary variables and demonstrate the ability to protect continuous ones.
APA
Mary, J., Calauzènes, C. &amp; Karoui, N.E. (2019). Fairness-Aware Learning for Continuous Attributes and Treatments. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4382-4391. Available from https://proceedings.mlr.press/v97/mary19a.html.