All of the Fairness for Edge Prediction with Optimal Transport

Charlotte Laclau, Ievgen Redko, Manvi Choudhary, Christine Largeron
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1774-1782, 2021.

Abstract

Machine learning and data mining algorithms are increasingly used to support decision-making systems in many areas of high societal importance, such as healthcare, education, or security. While very efficient in their predictive abilities, the deployed algorithms sometimes learn an inductive model with a discriminative bias due to the presence of such a bias in the learning sample. This problem gave rise to the new field of algorithmic fairness, whose goal is to correct the discriminative bias introduced by a certain attribute in order to decorrelate it from the model's output. In this paper, we study the problem of fairness for the task of edge prediction in graphs, a scenario that remains largely under-investigated compared to the more popular setting of fair classification. To this end, we formulate the problem of fair edge prediction, analyze it theoretically, and propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between group and individual fairness. We experimentally show the versatility of our approach and its capacity to provide explicit control over different notions of fairness and prediction accuracy.
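To make the repairing idea more concrete, below is a minimal, hypothetical Python sketch of adjacency-matrix repair with optimal transport, written with the POT library. It is an illustration of the general principle rather than the authors' actual algorithm: the function repair_adjacency and the interpolation parameter lam are invented for this example. Each node's row of the adjacency matrix is treated as its connection profile, the profiles of the two sensitive groups are coupled by an exact transport plan, and each profile is moved part of the way toward a common midpoint, with lam trading off group fairness (lam = 1, fully repaired) against individual fairness (lam = 0, graph unchanged).

# Illustrative sketch only: repairing node connection profiles with optimal
# transport so that they become less predictive of a binary sensitive attribute.
# Requires numpy and POT (pip install pot); names below are hypothetical.
import numpy as np
import ot  # Python Optimal Transport


def repair_adjacency(A, s, lam=1.0):
    """Move the two groups' rows of A toward each other along the OT plan.
    lam = 1: full (group-fair) repair; lam = 0: graph left unchanged."""
    A = np.asarray(A, dtype=float)
    idx0, idx1 = np.where(s == 0)[0], np.where(s == 1)[0]
    X0, X1 = A[idx0], A[idx1]                      # rows = node connection profiles
    a, b = ot.unif(len(idx0)), ot.unif(len(idx1))  # uniform weights within each group
    M = ot.dist(X0, X1)                            # squared Euclidean cost between profiles
    G = ot.emd(a, b, M)                            # exact optimal transport coupling
    X0_proj = (G / a[:, None]) @ X1                # barycentric projection of group 0 onto group 1
    X1_proj = (G.T / b[:, None]) @ X0              # barycentric projection of group 1 onto group 0
    A_rep = A.copy()
    A_rep[idx0] = (1 - lam / 2) * X0 + (lam / 2) * X0_proj
    A_rep[idx1] = (1 - lam / 2) * X1 + (lam / 2) * X1_proj
    return A_rep


# Tiny usage example on a random undirected graph with two groups of nodes
rng = np.random.default_rng(0)
s = np.array([0] * 10 + [1] * 10)
A = np.triu((rng.random((20, 20)) < 0.2).astype(float), 1)
A = A + A.T                                        # symmetric, no self-loops
A_half = repair_adjacency(A, s, lam=0.5)           # partial repair

Note that the repaired matrix in this sketch is weighted and generally no longer symmetric, so one would typically symmetrize or threshold it before running an edge-prediction model; for the actual repair procedure and its analysis, see the paper itself (PDF linked in the BibTeX entry below).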

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-laclau21a,
  title     = {All of the Fairness for Edge Prediction with Optimal Transport},
  author    = {Laclau, Charlotte and Redko, Ievgen and Choudhary, Manvi and Largeron, Christine},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1774--1782},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/laclau21a/laclau21a.pdf},
  url       = {https://proceedings.mlr.press/v130/laclau21a.html}
}
Endnote
%0 Conference Paper
%T All of the Fairness for Edge Prediction with Optimal Transport
%A Charlotte Laclau
%A Ievgen Redko
%A Manvi Choudhary
%A Christine Largeron
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-laclau21a
%I PMLR
%P 1774--1782
%U https://proceedings.mlr.press/v130/laclau21a.html
%V 130
APA
Laclau, C., Redko, I., Choudhary, M. & Largeron, C. (2021). All of the Fairness for Edge Prediction with Optimal Transport. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1774-1782. Available from https://proceedings.mlr.press/v130/laclau21a.html.
