Disparate Impact on Group Accuracy of Linearization for Private Inference

Saswat Das, Marco Romanelli, Ferdinando Fioretto
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:10168-10184, 2024.

Abstract

Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the finetuning step for linearized models can serve as an effective mitigation strategy.
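The linearization the abstract refers to replaces a chosen subset of a network's ReLU activations with the identity function, so those layers cost only cheap linear operations under cryptographic protocols. As a minimal illustrative sketch (not the authors' implementation), a per-layer mask can select which activations are linearized:

```python
def relu(v):
    """Elementwise ReLU on a vector given as a list of floats."""
    return [max(x, 0.0) for x in v]

def linear_layer(v, W):
    """Matrix-vector product; W is a list of rows."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def forward(v, weights, linearized):
    """Forward pass in which layer i's ReLU is replaced by the identity
    whenever linearized[i] is True -- the 'ReLU reduction' idea the
    paper studies. All names here are illustrative, not the paper's API."""
    h = v
    for W, is_linear in zip(weights, linearized):
        z = linear_layer(h, W)
        h = z if is_linear else relu(z)
    return h
```

Because a linearized layer passes negative pre-activations through unchanged, the effective decision boundary can shift, which is the mechanism behind the accuracy effects the paper analyzes.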

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-das24d,
  title     = {Disparate Impact on Group Accuracy of Linearization for Private Inference},
  author    = {Das, Saswat and Romanelli, Marco and Fioretto, Ferdinando},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {10168--10184},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24d/das24d.pdf},
  url       = {https://proceedings.mlr.press/v235/das24d.html},
  abstract  = {Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the finetuning step for linearized models can serve as an effective mitigation strategy.}
}
Endnote
%0 Conference Paper
%T Disparate Impact on Group Accuracy of Linearization for Private Inference
%A Saswat Das
%A Marco Romanelli
%A Ferdinando Fioretto
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-das24d
%I PMLR
%P 10168--10184
%U https://proceedings.mlr.press/v235/das24d.html
%V 235
%X Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the finetuning step for linearized models can serve as an effective mitigation strategy.
APA
Das, S., Romanelli, M., & Fioretto, F. (2024). Disparate Impact on Group Accuracy of Linearization for Private Inference. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research, 235:10168-10184. Available from https://proceedings.mlr.press/v235/das24d.html.