Causal Discovery for Fairness

Rūta Binkytė, Karima Makhlouf, Carlos Pinzón, Sami Zhioua, Catuscia Palamidessi
Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy, PMLR 214:7-22, 2023.

Abstract

Fairness guarantees that ML decisions do not result in discrimination against individuals or minority groups. Reliably identifying and measuring fairness/discrimination is better achieved using causality, which considers the causal relation, beyond mere association, between the sensitive attribute (e.g., gender, race, religion) and the decision (e.g., job hiring, loan granting). The main impediment to the use of causality to address fairness, however, is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption; instead, we review the major algorithms for discovering causal relations from observable data. This study focuses on causal discovery and its impact on fairness. In particular, we show how different causal discovery approaches may result in different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions.
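The paper itself gives the details; as a rough, hypothetical illustration of the kind of conditional-independence test that constraint-based discovery algorithms (such as PC) rely on, consider a toy model where a sensitive attribute A affects a decision Y only through a mediator X. A constraint-based algorithm would drop the direct A-Y edge exactly when corr(A, Y | X) is close to zero, and that single edge decision changes whether direct discrimination is concluded. All variable names, coefficients, and the structural model below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def partial_corr(xs, ys, zs):
    """Correlation of x and y after controlling for a single variable z."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

random.seed(0)
n = 5000
# Hypothetical structural model: A -> X -> Y, with NO direct A -> Y edge.
a = [random.choice([-1.0, 1.0]) for _ in range(n)]   # sensitive attribute
x = [0.8 * ai + random.gauss(0, 1) for ai in a]      # mediator (e.g., education)
y = [0.8 * xi + random.gauss(0, 1) for xi in x]      # decision score

marginal = pearson(a, y)              # strong marginal association with A
conditional = partial_corr(a, y, x)   # vanishes once the mediator is controlled

print(f"corr(A, Y)     = {marginal:.3f}")
print(f"corr(A, Y | X) = {conditional:.3f}")
```

Here the marginal association between A and Y is substantial, while the partial correlation given X is near zero, so a constraint-based algorithm would remove the A-Y edge and the discovered graph would attribute the disparity entirely to the mediated path. Whether an independence test (mis)fires at this step is precisely the kind of discovery difference whose downstream effect on fairness conclusions the paper studies.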

Cite this Paper


BibTeX
@InProceedings{pmlr-v214-binkyte23a,
  title     = {Causal Discovery for Fairness},
  author    = {Binkyt\.{e}, R\={u}ta and Makhlouf, Karima and Pinz\'{o}n, Carlos and Zhioua, Sami and Palamidessi, Catuscia},
  booktitle = {Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy},
  pages     = {7--22},
  year      = {2023},
  editor    = {Dieng, Awa and Rateike, Miriam and Farnadi, Golnoosh and Fioretto, Ferdinando and Kusner, Matt and Schrouff, Jessica},
  volume    = {214},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v214/binkyte23a/binkyte23a.pdf},
  url       = {https://proceedings.mlr.press/v214/binkyte23a.html},
  abstract  = {Fairness guarantees that ML decisions do not result in discrimination against individuals or minority groups. Reliably identifying and measuring fairness/discrimination is better achieved using causality, which considers the causal relation, beyond mere association, between the sensitive attribute (e.g., gender, race, religion) and the decision (e.g., job hiring, loan granting). The main impediment to the use of causality to address fairness, however, is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption; instead, we review the major algorithms for discovering causal relations from observable data. This study focuses on causal discovery and its impact on fairness. In particular, we show how different causal discovery approaches may result in different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions.}
}
Endnote
%0 Conference Paper
%T Causal Discovery for Fairness
%A Rūta Binkytė
%A Karima Makhlouf
%A Carlos Pinzón
%A Sami Zhioua
%A Catuscia Palamidessi
%B Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy
%C Proceedings of Machine Learning Research
%D 2023
%E Awa Dieng
%E Miriam Rateike
%E Golnoosh Farnadi
%E Ferdinando Fioretto
%E Matt Kusner
%E Jessica Schrouff
%F pmlr-v214-binkyte23a
%I PMLR
%P 7--22
%U https://proceedings.mlr.press/v214/binkyte23a.html
%V 214
%X Fairness guarantees that ML decisions do not result in discrimination against individuals or minority groups. Reliably identifying and measuring fairness/discrimination is better achieved using causality, which considers the causal relation, beyond mere association, between the sensitive attribute (e.g., gender, race, religion) and the decision (e.g., job hiring, loan granting). The main impediment to the use of causality to address fairness, however, is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption; instead, we review the major algorithms for discovering causal relations from observable data. This study focuses on causal discovery and its impact on fairness. In particular, we show how different causal discovery approaches may result in different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions.
APA
Binkytė, R., Makhlouf, K., Pinzón, C., Zhioua, S. & Palamidessi, C. (2023). Causal Discovery for Fairness. Proceedings of the Workshop on Algorithmic Fairness through the Lens of Causality and Privacy, in Proceedings of Machine Learning Research 214:7-22. Available from https://proceedings.mlr.press/v214/binkyte23a.html.