Counterfactual Fairness for Graph Neural Networks with Limited and Privacy Protected Sensitive Attributes

Xuemin Wang, Lei Wang, Tianlong Gu, Xuguang Bao
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:719-734, 2025.

Abstract

Graph Neural Networks (GNNs) have shown outstanding performance in learning graph representations, which has expanded their application to high-risk areas. However, GNNs may inherit biases from the graph data and make unfair predictions against protected subgroups. To eliminate this bias, a natural idea is to pursue counterfactual fairness from a causal perspective. However, achieving counterfactual fairness requires sufficient sensitive attributes as guidance, which is often infeasible in the real world: users with varying privacy preferences publish their sensitive attributes selectively, so only limited sensitive attributes can be collected. Moreover, even the users who do publish their sensitive attributes still face privacy risks. In this paper, we first consider the setting in which sensitive attributes are limited and propose a framework called PCFGR (Partially observed sensitive Attributes in Counterfactual Fair Graph Representation Learning) to learn fair graph representations from limited sensitive attributes. The framework trains a sensitive attribute estimator that supplies sufficient and accurate sensitive attributes; with these, it can generate counterfactuals and eliminate bias efficiently. Second, to protect the privacy of the sensitive attributes, we further propose PCFGR\D. Specifically, PCFGR\D first perturbs the sensitive attributes using Local Differential Privacy (LDP) and then employs a forward correction loss to train an accurate sensitive attribute estimator. Extensive experiments show that our approach outperforms the alternatives in balancing utility and fairness.
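For context, counterfactual fairness (Kusner et al., 2017) asks that a predictor's output distribution be unchanged under an intervention on the sensitive attribute. With sensitive attribute A, observed features X, latent background variables U, and predictor Ŷ, the standard definition reads:

```latex
% Counterfactual fairness: for every outcome y and every counterfactual
% value a' of the sensitive attribute, conditioning on the observed
% evidence (X = x, A = a), the intervention A <- a' must not change the
% prediction distribution.
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big),
\quad \forall y,\; \forall a'.
```

Generating counterfactuals therefore requires knowing each node's sensitive attribute, which is exactly what the limited-attribute setting takes away; this is the gap the estimator in PCFGR is meant to fill.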
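The two privacy mechanisms named in the abstract are standard and can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation, of (i) randomized response, the classic LDP mechanism for a binary sensitive attribute, and (ii) a forward correction loss (Patrini et al., 2017) that pushes the estimator's predictions through the known LDP noise-transition matrix before comparing against the perturbed labels. Function names and the binary-attribute assumption are ours.

```python
import math
import torch
import torch.nn.functional as F

def randomized_response(s: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Perturb a binary sensitive attribute s (values in {0, 1}) under epsilon-LDP.

    Each value is kept with probability e^eps / (e^eps + 1) and flipped
    otherwise, which satisfies epsilon-local differential privacy.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    flip = torch.bernoulli(torch.full(s.shape, 1.0 - p_keep))
    return (s.float() + flip) % 2  # XOR with the flip mask

def forward_corrected_loss(logits: torch.Tensor,
                           noisy_s: torch.Tensor,
                           epsilon: float) -> torch.Tensor:
    """Cross-entropy against LDP-perturbed labels with forward correction."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    # Noise-transition matrix: T[i, j] = P(noisy label = j | clean label = i).
    T = torch.tensor([[p_keep, 1.0 - p_keep],
                      [1.0 - p_keep, p_keep]], device=logits.device)
    clean_probs = F.softmax(logits, dim=1)  # estimator's P(clean s | node)
    noisy_probs = clean_probs @ T           # implied P(noisy s | node)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_s.long())

# Usage: perturb the published attributes once, then train on the noisy copy.
s = torch.randint(0, 2, (100,))        # observed binary sensitive attributes
noisy_s = randomized_response(s, epsilon=1.0)
logits = torch.randn(100, 2)           # stand-in for estimator outputs
loss = forward_corrected_loss(logits, noisy_s, epsilon=1.0)
```

The design intuition behind forward correction is that the corrected loss is minimized by the same predictor that would minimize the clean-label loss, so the estimator can be trained accurately even though it only ever sees the LDP-perturbed attributes.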

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-wang25e,
  title     = {Counterfactual Fairness for Graph Neural Networks with Limited and Privacy Protected Sensitive Attributes},
  author    = {Wang, Xuemin and Wang, Lei and Gu, Tianlong and Bao, Xuguang},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {719--734},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/wang25e/wang25e.pdf},
  url       = {https://proceedings.mlr.press/v260/wang25e.html},
  abstract  = {Graph Neural Networks (GNNs) have shown outstanding performance in learning graph representations, which has expanded their application to high-risk areas. However, GNNs may inherit biases from the graph data and make unfair predictions against protected subgroups. To eliminate this bias, a natural idea is to pursue counterfactual fairness from a causal perspective. However, achieving counterfactual fairness requires sufficient sensitive attributes as guidance, which is often infeasible in the real world: users with varying privacy preferences publish their sensitive attributes selectively, so only limited sensitive attributes can be collected. Moreover, even the users who do publish their sensitive attributes still face privacy risks. In this paper, we first consider the setting in which sensitive attributes are limited and propose a framework called PCFGR (Partially observed sensitive Attributes in Counterfactual Fair Graph Representation Learning) to learn fair graph representations from limited sensitive attributes. The framework trains a sensitive attribute estimator that supplies sufficient and accurate sensitive attributes; with these, it can generate counterfactuals and eliminate bias efficiently. Second, to protect the privacy of the sensitive attributes, we further propose PCFGR$\backslash$D. Specifically, PCFGR$\backslash$D first perturbs the sensitive attributes using Local Differential Privacy (LDP) and then employs a forward correction loss to train an accurate sensitive attribute estimator. Extensive experiments show that our approach outperforms the alternatives in balancing utility and fairness.}
}
Endnote
%0 Conference Paper
%T Counterfactual Fairness for Graph Neural Networks with Limited and Privacy Protected Sensitive Attributes
%A Xuemin Wang
%A Lei Wang
%A Tianlong Gu
%A Xuguang Bao
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-wang25e
%I PMLR
%P 719--734
%U https://proceedings.mlr.press/v260/wang25e.html
%V 260
%X Graph Neural Networks (GNNs) have shown outstanding performance in learning graph representations, which has expanded their application to high-risk areas. However, GNNs may inherit biases from the graph data and make unfair predictions against protected subgroups. To eliminate this bias, a natural idea is to pursue counterfactual fairness from a causal perspective. However, achieving counterfactual fairness requires sufficient sensitive attributes as guidance, which is often infeasible in the real world: users with varying privacy preferences publish their sensitive attributes selectively, so only limited sensitive attributes can be collected. Moreover, even the users who do publish their sensitive attributes still face privacy risks. In this paper, we first consider the setting in which sensitive attributes are limited and propose a framework called PCFGR (Partially observed sensitive Attributes in Counterfactual Fair Graph Representation Learning) to learn fair graph representations from limited sensitive attributes. The framework trains a sensitive attribute estimator that supplies sufficient and accurate sensitive attributes; with these, it can generate counterfactuals and eliminate bias efficiently. Second, to protect the privacy of the sensitive attributes, we further propose PCFGR\D. Specifically, PCFGR\D first perturbs the sensitive attributes using Local Differential Privacy (LDP) and then employs a forward correction loss to train an accurate sensitive attribute estimator. Extensive experiments show that our approach outperforms the alternatives in balancing utility and fairness.
APA
Wang, X., Wang, L., Gu, T. & Bao, X. (2025). Counterfactual Fairness for Graph Neural Networks with Limited and Privacy Protected Sensitive Attributes. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:719-734. Available from https://proceedings.mlr.press/v260/wang25e.html.
