Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint

Yoichi Chikahara, Shinsaku Sakaue, Akinori Fujino, Hisashi Kashima
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:145-153, 2021.

Abstract

Machine learning is used to make decisions for individuals in various fields, which requires us to achieve good prediction accuracy while ensuring fairness with respect to sensitive features (e.g., race and gender). This problem, however, remains difficult in complex real-world scenarios. To quantify unfairness in such scenarios, existing methods utilize path-specific causal effects. However, none of them can ensure fairness for each individual without making impractical functional assumptions about the data. In this paper, we propose a far more practical framework for learning an individually fair classifier. To avoid restrictive functional assumptions, we define the probability of individual unfairness (PIU) and solve an optimization problem where an upper bound on PIU, which can be estimated from data, is constrained to be close to zero. We elucidate why our method can guarantee fairness for each individual. Experimental results show that our method can learn an individually fair classifier at a slight cost in accuracy.
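
The optimization described above, training a classifier while pushing a data-estimable upper bound on PIU toward zero, can be illustrated with a generic penalized objective. The sketch below is not the authors' implementation: train_fair_classifier, the user-supplied unfairness_upper_bound callable, the penalty weight lam, and the flip-based toy_bound stand-in are all illustrative assumptions; the paper's actual PIU upper bound is based on path-specific causal effects and is not reproduced here.

# Hypothetical sketch (not the paper's code): train a linear classifier whose
# loss adds a penalty keeping an *estimated* unfairness upper bound near zero.
import torch

def train_fair_classifier(X, y, unfairness_upper_bound, lam=10.0, epochs=500, lr=0.05):
    """X: (n, d) float tensor; y: (n,) float tensor of 0/1 labels.
    unfairness_upper_bound(model, X) -> scalar tensor estimating the bound."""
    model = torch.nn.Linear(X.shape[1], 1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(X).squeeze(-1)
        # prediction loss + penalty pushing the estimated bound toward zero
        loss = bce(logits, y) + lam * torch.clamp(unfairness_upper_bound(model, X), min=0.0)
        loss.backward()
        opt.step()
    return model

# Toy stand-in for the bound: mean shift in predicted probability when a
# (hypothetical) binary sensitive column 0 is flipped. This is NOT the
# paper's PSE-based estimator, only a placeholder to make the sketch runnable.
def toy_bound(model, X):
    X_flip = X.clone()
    X_flip[:, 0] = 1.0 - X_flip[:, 0]
    p = torch.sigmoid(model(X).squeeze(-1))
    p_flip = torch.sigmoid(model(X_flip).squeeze(-1))
    return (p - p_flip).abs().mean()

Under these assumptions, train_fair_classifier(X, y, toy_bound) returns a classifier whose loss trades prediction accuracy against the penalized fairness term, mirroring the accuracy-fairness trade-off reported in the abstract.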

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-chikahara21a,
  title     = {Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint},
  author    = {Chikahara, Yoichi and Sakaue, Shinsaku and Fujino, Akinori and Kashima, Hisashi},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {145--153},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/chikahara21a/chikahara21a.pdf},
  url       = {https://proceedings.mlr.press/v130/chikahara21a.html},
  abstract  = {Machine learning is used to make decisions for individuals in various fields, which requires us to achieve good prediction accuracy while ensuring fairness with respect to sensitive features (e.g., race and gender). This problem, however, remains difficult in complex real-world scenarios. To quantify unfairness in such scenarios, existing methods utilize path-specific causal effects. However, none of them can ensure fairness for each individual without making impractical functional assumptions about the data. In this paper, we propose a far more practical framework for learning an individually fair classifier. To avoid restrictive functional assumptions, we define the probability of individual unfairness (PIU) and solve an optimization problem where an upper bound on PIU, which can be estimated from data, is constrained to be close to zero. We elucidate why our method can guarantee fairness for each individual. Experimental results show that our method can learn an individually fair classifier at a slight cost in accuracy.}
}
Endnote
%0 Conference Paper
%T Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint
%A Yoichi Chikahara
%A Shinsaku Sakaue
%A Akinori Fujino
%A Hisashi Kashima
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-chikahara21a
%I PMLR
%P 145--153
%U https://proceedings.mlr.press/v130/chikahara21a.html
%V 130
%X Machine learning is used to make decisions for individuals in various fields, which requires us to achieve good prediction accuracy while ensuring fairness with respect to sensitive features (e.g., race and gender). This problem, however, remains difficult in complex real-world scenarios. To quantify unfairness in such scenarios, existing methods utilize path-specific causal effects. However, none of them can ensure fairness for each individual without making impractical functional assumptions about the data. In this paper, we propose a far more practical framework for learning an individually fair classifier. To avoid restrictive functional assumptions, we define the probability of individual unfairness (PIU) and solve an optimization problem where an upper bound on PIU, which can be estimated from data, is constrained to be close to zero. We elucidate why our method can guarantee fairness for each individual. Experimental results show that our method can learn an individually fair classifier at a slight cost in accuracy.
APA
Chikahara, Y., Sakaue, S., Fujino, A. & Kashima, H. (2021). Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:145-153. Available from https://proceedings.mlr.press/v130/chikahara21a.html.