Stable Fair Graph Representation Learning with Lipschitz Constraint

Qiang Chen, Zhongze Wu, Xiu Su, Xi Lin, Zhe Qu, Shan You, Shuo Yang, Chang Xu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:9354-9376, 2025.

Abstract

Group fairness based on adversarial training has gained significant attention on graph data; such methods mask sensitive attributes to generate fair feature views. However, existing models suffer from training instability due to the uncertainty of the generated masks and the trade-off between fairness and utility. In this work, we propose a stable fair Graph Neural Network (SFG) that maintains training stability while preserving accuracy and fairness performance. Specifically, we first theoretically derive a tight upper Lipschitz bound to control the stability of existing adversarial-based models and employ a stochastic projected subgradient algorithm, operating in a block-coordinate manner, to constrain this bound. Additionally, we construct an uncertainty set to train the model, which prevents unstable training by dropping nodes that overfit in pursuit of fairness. Extensive experiments conducted on three real-world datasets demonstrate that SFG is stable and outperforms other state-of-the-art adversarial-based methods in terms of both fairness and utility. Code is available at https://github.com/sh-qiangchen/SFG.
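To make the Lipschitz-constrained, block-coordinate projection idea from the abstract concrete, below is a minimal, generic PyTorch sketch. It is an illustrative assumption, not the authors' SFG algorithm or the code in the repository above: after each stochastic gradient step, every weight matrix of a toy two-layer graph network is rescaled so its spectral norm stays at or below a chosen bound, which caps the layer-wise Lipschitz constant one block at a time. The model, the bound lip_bound, and the random data are hypothetical placeholders.

# Generic sketch of a per-layer Lipschitz constraint (not the exact SFG method):
# after each optimizer step, rescale each weight block so its spectral norm
# stays <= lip_bound. Exact Euclidean projection would clip singular values;
# uniform rescaling is a simpler way to enforce the same norm constraint.
import torch
import torch.nn as nn

class TwoLayerGNN(nn.Module):
    """Toy 2-layer graph network: A_hat @ X @ W per layer (illustrative only)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        h = torch.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

@torch.no_grad()
def enforce_lipschitz(model, lip_bound=1.0):
    # Block-coordinate step: treat each weight matrix as one block and
    # rescale it whenever its spectral norm exceeds lip_bound.
    for w in [model.w1.weight, model.w2.weight]:
        sigma = torch.linalg.matrix_norm(w, ord=2)
        if sigma > lip_bound:
            w.mul_(lip_bound / sigma)

# Usage with random placeholders for a real graph, features, and labels.
n, d = 32, 16
a_hat = torch.eye(n)                      # stand-in for a normalized adjacency
x = torch.randn(n, d)
y = torch.randint(0, 2, (n,))
model = TwoLayerGNN(d, 32, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(a_hat, x), y)
    loss.backward()
    opt.step()
    enforce_lipschitz(model, lip_bound=1.0)  # keep the layer-wise bound in check

In a stochastic projected-subgradient scheme of the kind the abstract describes, a constraint-enforcing step like this would run after every stochastic update, touching one weight block at a time.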

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chen25bs,
  title     = {Stable Fair Graph Representation Learning with {L}ipschitz Constraint},
  author    = {Chen, Qiang and Wu, Zhongze and Su, Xiu and Lin, Xi and Qu, Zhe and You, Shan and Yang, Shuo and Xu, Chang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {9354--9376},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chen25bs/chen25bs.pdf},
  url       = {https://proceedings.mlr.press/v267/chen25bs.html},
  abstract  = {Group fairness based on adversarial training has gained significant attention on graph data, which was implemented by masking sensitive attributes to generate fair feature views. However, existing models suffer from training instability due to uncertainty of the generated masks and the trade-off between fairness and utility. In this work, we propose a stable fair Graph Neural Network (SFG) to maintain training stability while preserving accuracy and fairness performance. Specifically, we first theoretically derive a tight upper Lipschitz bound to control the stability of existing adversarial-based models and employ a stochastic projected subgradient algorithm to constrain the bound, which operates in a block-coordinate manner. Additionally, we construct the uncertainty set to train the model, which can prevent unstable training by dropping some overfitting nodes caused by chasing fairness. Extensive experiments conducted on three real-world datasets demonstrate that SFG is stable and outperforms other state-of-the-art adversarial-based methods in terms of both fairness and utility performance. Codes are available at https://github.com/sh-qiangchen/SFG.}
}
Endnote
%0 Conference Paper
%T Stable Fair Graph Representation Learning with Lipschitz Constraint
%A Qiang Chen
%A Zhongze Wu
%A Xiu Su
%A Xi Lin
%A Zhe Qu
%A Shan You
%A Shuo Yang
%A Chang Xu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chen25bs
%I PMLR
%P 9354--9376
%U https://proceedings.mlr.press/v267/chen25bs.html
%V 267
%X Group fairness based on adversarial training has gained significant attention on graph data, which was implemented by masking sensitive attributes to generate fair feature views. However, existing models suffer from training instability due to uncertainty of the generated masks and the trade-off between fairness and utility. In this work, we propose a stable fair Graph Neural Network (SFG) to maintain training stability while preserving accuracy and fairness performance. Specifically, we first theoretically derive a tight upper Lipschitz bound to control the stability of existing adversarial-based models and employ a stochastic projected subgradient algorithm to constrain the bound, which operates in a block-coordinate manner. Additionally, we construct the uncertainty set to train the model, which can prevent unstable training by dropping some overfitting nodes caused by chasing fairness. Extensive experiments conducted on three real-world datasets demonstrate that SFG is stable and outperforms other state-of-the-art adversarial-based methods in terms of both fairness and utility performance. Codes are available at https://github.com/sh-qiangchen/SFG.
APA
Chen, Q., Wu, Z., Su, X., Lin, X., Qu, Z., You, S., Yang, S. & Xu, C. (2025). Stable Fair Graph Representation Learning with Lipschitz Constraint. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:9354-9376. Available from https://proceedings.mlr.press/v267/chen25bs.html.
