Fair Supervised Learning with A Simple Random Sampler of Sensitive Attributes

Jinwon Sohn, Qifan Song, Guang Lin
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1594-1602, 2024.

Abstract

As data-driven decision processes become dominant in industrial applications, fairness-aware machine learning has attracted great attention across various areas. This work proposes fairness penalties, learned by neural networks with a simple random sampler of sensitive attributes, for non-discriminatory supervised learning. In contrast to many existing works that rely critically on the discreteness of sensitive attributes and response variables, the proposed penalty can handle versatile formats of sensitive attributes, so it is more broadly applicable in practice than many existing algorithms. The penalty enables a computationally efficient, group-level, in-processing fairness-aware training framework. Empirical evidence shows that our framework achieves better utility and fairness measures on popular benchmark data sets than competing methods. We also theoretically characterize the estimation error and loss of utility of the proposed neural-penalized risk minimization problem.
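The abstract describes an in-processing approach: a prediction loss is minimized jointly with a fairness penalty over the same model parameters. As a loose illustration only — not the paper's neural, sampler-based penalty — the sketch below shows the penalized-risk template with a simple demographic-parity-style gap for a binary sensitive attribute. The function names, the squared-error loss, and the group-mean-gap penalty are our own assumptions for exposition.

```python
import numpy as np

def dp_penalty(preds, s):
    """Demographic-parity-style gap: |E[f(X) | s=1] - E[f(X) | s=0]|.

    `preds` are model outputs; `s` is a binary sensitive attribute.
    (The paper's penalty is more general and learned by a neural
    network; this fixed group-gap statistic is a stand-in.)
    """
    return abs(preds[s == 1].mean() - preds[s == 0].mean())

def penalized_risk(preds, y, s, lam):
    """In-processing objective: utility loss + lam * fairness penalty."""
    utility_loss = ((preds - y) ** 2).mean()  # squared error as a placeholder
    return utility_loss + lam * dp_penalty(preds, s)
```

In an in-processing framework, `penalized_risk` would be minimized over the predictor's parameters (e.g., a neural network's weights), with `lam` trading off utility against the fairness measure.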

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-sohn24a,
  title     = {Fair Supervised Learning with A Simple Random Sampler of Sensitive Attributes},
  author    = {Sohn, Jinwon and Song, Qifan and Lin, Guang},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1594--1602},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/sohn24a/sohn24a.pdf},
  url       = {https://proceedings.mlr.press/v238/sohn24a.html},
  abstract  = {As the data-driven decision process becomes dominating for industrial applications, fairness-aware machine learning arouses great attention in various areas. This work proposes fairness penalties learned by neural networks with a simple random sampler of sensitive attributes for non-discriminatory supervised learning. In contrast to many existing works that critically rely on the discreteness of sensitive attributes and response variables, the proposed penalty is able to handle versatile formats of the sensitive attributes, so it is more extensively applicable in practice than many existing algorithms. This penalty enables us to build a computationally efficient group-level in-processing fairness-aware training framework. Empirical evidence shows that our framework enjoys better utility and fairness measures on popular benchmark data sets than competing methods. We also theoretically characterize estimation errors and loss of utility of the proposed neural-penalized risk minimization problem.}
}
Endnote
%0 Conference Paper
%T Fair Supervised Learning with A Simple Random Sampler of Sensitive Attributes
%A Jinwon Sohn
%A Qifan Song
%A Guang Lin
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-sohn24a
%I PMLR
%P 1594--1602
%U https://proceedings.mlr.press/v238/sohn24a.html
%V 238
%X As the data-driven decision process becomes dominating for industrial applications, fairness-aware machine learning arouses great attention in various areas. This work proposes fairness penalties learned by neural networks with a simple random sampler of sensitive attributes for non-discriminatory supervised learning. In contrast to many existing works that critically rely on the discreteness of sensitive attributes and response variables, the proposed penalty is able to handle versatile formats of the sensitive attributes, so it is more extensively applicable in practice than many existing algorithms. This penalty enables us to build a computationally efficient group-level in-processing fairness-aware training framework. Empirical evidence shows that our framework enjoys better utility and fairness measures on popular benchmark data sets than competing methods. We also theoretically characterize estimation errors and loss of utility of the proposed neural-penalized risk minimization problem.
APA
Sohn, J., Song, Q. & Lin, G. (2024). Fair Supervised Learning with A Simple Random Sampler of Sensitive Attributes. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1594-1602. Available from https://proceedings.mlr.press/v238/sohn24a.html.