FairProof : Confidential and Certifiable Fairness for Neural Networks

Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:55682-55705, 2024.

Abstract

Machine learning models are increasingly used in societal applications, yet legal and privacy concerns often demand that they be kept confidential. Consequently, consumers, who are often at the receiving end of model predictions, are growing distrustful of the fairness properties of these models. To address this, we propose FairProof – a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model while maintaining confidentiality. We also propose a fairness certification algorithm for fully-connected neural networks that is well-suited to ZKPs and is used in this system. We implement FairProof in Gnark and demonstrate empirically that our system is practically feasible. Code is available at https://github.com/infinite-pursuits/FairProof.
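
As context for readers unfamiliar with Gnark, the sketch below shows the general shape of a Gnark (Go) zero-knowledge proving workflow: define a circuit with secret and public variables, compile it, run a Groth16 setup, prove with the full witness, and verify with only the public part. This is a minimal illustrative sketch, not FairProof's actual certification circuit; the ToyParityCircuit, its fields (ScoreA, ScoreB, Equal), and the equality check are assumptions made up for illustration, and the API shown follows recent Gnark releases (v0.8+).

package main

import (
	"fmt"

	"github.com/consensys/gnark-crypto/ecc"
	"github.com/consensys/gnark/backend/groth16"
	"github.com/consensys/gnark/frontend"
	"github.com/consensys/gnark/frontend/cs/r1cs"
)

// ToyParityCircuit is a placeholder constraint system (NOT the FairProof circuit):
// the prover holds two secret model scores, for an individual and a counterfactual
// differing only in a protected attribute, and proves without revealing them that
// the scores agree, binding the result to a public claim bit.
type ToyParityCircuit struct {
	ScoreA frontend.Variable `gnark:",secret"` // secret score for the individual
	ScoreB frontend.Variable `gnark:",secret"` // secret score for the counterfactual
	Equal  frontend.Variable `gnark:",public"` // public claim: 1 if the scores match
}

// Define encodes the toy check as arithmetic constraints.
func (c *ToyParityCircuit) Define(api frontend.API) error {
	diff := api.Sub(c.ScoreA, c.ScoreB)
	api.AssertIsEqual(api.IsZero(diff), c.Equal) // public bit must equal the secret comparison
	return nil
}

func main() {
	// Compile the circuit into an R1CS over the BN254 scalar field.
	var circuit ToyParityCircuit
	ccs, _ := frontend.Compile(ecc.BN254.ScalarField(), r1cs.NewBuilder, &circuit)

	// Groth16 setup: proving and verifying keys.
	pk, vk, _ := groth16.Setup(ccs)

	// The model owner fills in the secret witness and the public claim.
	assignment := ToyParityCircuit{ScoreA: 7, ScoreB: 7, Equal: 1}
	witness, _ := frontend.NewWitness(&assignment, ecc.BN254.ScalarField())
	publicWitness, _ := witness.Public()

	// Prove with the full witness; anyone can verify with only the public part.
	proof, _ := groth16.Prove(ccs, pk, witness)
	err := groth16.Verify(proof, vk, publicWitness)
	fmt.Println("verification error:", err)
}

FairProof's actual circuit instead certifies the fairness of a confidential fully-connected neural network, but the compile/setup/prove/verify pipeline follows this same pattern.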

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-yadav24a,
  title     = {{F}air{P}roof : Confidential and Certifiable Fairness for Neural Networks},
  author    = {Yadav, Chhavi and Roy Chowdhury, Amrita and Boneh, Dan and Chaudhuri, Kamalika},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {55682--55705},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/yadav24a/yadav24a.pdf},
  url       = {https://proceedings.mlr.press/v235/yadav24a.html}
}
APA
Yadav, C., Roy Chowdhury, A., Boneh, D. & Chaudhuri, K. (2024). FairProof : Confidential and Certifiable Fairness for Neural Networks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:55682-55705. Available from https://proceedings.mlr.press/v235/yadav24a.html.
