Verifying Individual Fairness in Machine Learning Models

Philips George John, Deepak Vijaykeerthy, Diptikalyan Saha
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:749-758, 2020.

Abstract

We consider the problem of whether a given decision model, working with structured data, has individual fairness. Following the work of Dwork, a model is individually biased (or unfair) if there is a pair of valid inputs which are close to each other (according to an appropriate metric) but are treated differently by the model (different class label, or large difference in output), and it is unbiased (or fair) if no such pair exists. Our objective is to construct verifiers for proving individual fairness of a given model, and we do so by considering appropriate relaxations of the problem. We construct verifiers which are sound but not complete for linear classifiers, and kernelized polynomial/radial basis function classifiers. We also report the experimental results of evaluating our proposed algorithms on publicly available datasets.
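The abstract's notion of a sound (but not complete) verifier for linear classifiers can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed construction: for a linear classifier sign(w·x + b) under the Euclidean metric, if every valid input has margin |w·x + b| greater than ‖w‖·ε, then by Cauchy–Schwarz no two inputs within distance ε can receive different labels, so the model is certified individually fair; otherwise the check is simply inconclusive. The function name and inputs here are hypothetical.

```python
import numpy as np

def linear_fairness_certificate(w, b, X_valid, eps):
    """Sound-but-incomplete individual-fairness check for sign(w.x + b).

    Returns True only if no pair of valid inputs within l2 distance eps
    can be labeled differently; False means the check is inconclusive,
    not that the model is necessarily unfair.
    """
    margins = np.abs(X_valid @ w + b)          # distance of each point from the decision boundary, scaled by ||w||
    return bool(np.all(margins > np.linalg.norm(w) * eps))

w = np.array([1.0, -2.0])
b = 0.5
X = np.array([[3.0, 0.0], [0.0, 2.0]])
print(linear_fairness_certificate(w, b, X, eps=0.1))   # True: all margins exceed ||w||*eps
print(linear_fairness_certificate(w, b, X, eps=2.0))   # False: inconclusive at this radius
```

The asymmetry in the return value is what makes the verifier sound but not complete: a True answer is a proof of fairness, while a False answer proves nothing.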

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-george-john20a,
  title     = {Verifying Individual Fairness in Machine Learning Models},
  author    = {George John, Philips and Vijaykeerthy, Deepak and Saha, Diptikalyan},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {749--758},
  year      = {2020},
  editor    = {Jonas Peters and David Sontag},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/george-john20a/george-john20a.pdf},
  url       = {http://proceedings.mlr.press/v124/george-john20a.html},
  abstract  = {We consider the problem of whether a given decision model, working with structured data, has individual fairness. Following the work of Dwork, a model is individually biased (or unfair) if there is a pair of valid inputs which are close to each other (according to an appropriate metric) but are treated differently by the model (different class label, or large difference in output), and it is unbiased (or fair) if no such pair exists. Our objective is to construct verifiers for proving individual fairness of a given model, and we do so by considering appropriate relaxations of the problem. We construct verifiers which are sound but not complete for linear classifiers, and kernelized polynomial/radial basis function classifiers. We also report the experimental results of evaluating our proposed algorithms on publicly available datasets.}
}
Endnote
%0 Conference Paper
%T Verifying Individual Fairness in Machine Learning Models
%A Philips George John
%A Deepak Vijaykeerthy
%A Diptikalyan Saha
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-george-john20a
%I PMLR
%P 749--758
%U http://proceedings.mlr.press/v124/george-john20a.html
%V 124
%X We consider the problem of whether a given decision model, working with structured data, has individual fairness. Following the work of Dwork, a model is individually biased (or unfair) if there is a pair of valid inputs which are close to each other (according to an appropriate metric) but are treated differently by the model (different class label, or large difference in output), and it is unbiased (or fair) if no such pair exists. Our objective is to construct verifiers for proving individual fairness of a given model, and we do so by considering appropriate relaxations of the problem. We construct verifiers which are sound but not complete for linear classifiers, and kernelized polynomial/radial basis function classifiers. We also report the experimental results of evaluating our proposed algorithms on publicly available datasets.
APA
George John, P., Vijaykeerthy, D. & Saha, D. (2020). Verifying Individual Fairness in Machine Learning Models. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:749-758. Available from http://proceedings.mlr.press/v124/george-john20a.html.