Fair SA: Sensitivity Analysis for Fairness in Face Recognition

Aparna R. Joshi, Xavier Suau Cuadros, Nivedha Sivakumar, Luca Zappella, Nicholas Apostoloff
Proceedings of The Algorithmic Fairness through the Lens of Causality and Robustness, PMLR 171:40-58, 2022.

Abstract

As the use of deep learning in high impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high impact domain is that of face recognition, with real world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition algorithm. While traditional summary statistics suggest that the aggregate performance of face recognition models has continued to improve, these metrics do not directly measure the robustness or fairness of the models. Visual Psychophysics Sensitivity Analysis (VPSA) (19) provides a way to pinpoint the individual causes of failure by way of introducing incremental perturbations in the data. However, perturbations may affect subgroups differently. In this paper, we propose a new fairness evaluation based on robustness in the form of a generic framework that extends VPSA. With this framework, we can analyze the ability of a model to perform fairly for different subgroups of a population affected by perturbations, and pinpoint the exact failure modes for a subgroup by measuring targeted robustness. With the increasing focus on the fairness of models, we use face recognition as an example application of our framework and propose to compactly visualize the fairness analysis of a model via AUC matrices. We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed, thereby uncovering trends that were not visible using the model’s performance on subgroups without perturbations.
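The framework summarized above follows a simple pattern: sweep a perturbation (for example, Gaussian blur) over increasing strengths, score verification pairs for each subgroup at each strength, and collect the per-subgroup AUCs into a matrix whose rows can be compared for fairness gaps. The sketch below illustrates that pattern on synthetic data only; the embedding, pair construction, subgroup names, and blur levels are illustrative stand-ins and not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def embed(img):
    # Toy embedding: 8x8 block means of a 64x64 image, flattened and L2-normalised.
    feat = img.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()
    return feat / (np.linalg.norm(feat) + 1e-9)

def make_pairs(n_pairs):
    # Synthetic verification pairs: (img_a, img_b, same_identity).
    pairs = []
    for _ in range(n_pairs):
        base = rng.normal(size=(64, 64))
        same = rng.random() < 0.5
        other = base + 0.3 * rng.normal(size=(64, 64)) if same else rng.normal(size=(64, 64))
        pairs.append((base, other, int(same)))
    return pairs

# Hypothetical subgroups and perturbation levels (names and values are illustrative).
subgroups = {"group_A": make_pairs(200), "group_B": make_pairs(200)}
blur_levels = [0.0, 1.0, 2.0, 3.0]  # increasing perturbation strength

# AUC matrix: rows = subgroups, columns = perturbation levels.
auc_matrix = np.zeros((len(subgroups), len(blur_levels)))
for i, (name, pairs) in enumerate(subgroups.items()):
    for j, sigma in enumerate(blur_levels):
        scores, labels = [], []
        for a, b, same in pairs:
            b_pert = gaussian_filter(b, sigma=sigma) if sigma > 0 else b
            scores.append(float(embed(a) @ embed(b_pert)))  # cosine similarity of unit vectors
            labels.append(same)
        auc_matrix[i, j] = roc_auc_score(labels, scores)

print(auc_matrix)  # per-subgroup robustness curves; gaps between rows flag unequal degradation

Reading the matrix row by row shows how each subgroup degrades as the perturbation strengthens; comparing rows at a fixed column surfaces subgroups that are disproportionately affected, which is the kind of trend the paper reports being invisible on unperturbed data.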

Cite this Paper


BibTeX
@InProceedings{pmlr-v171-joshi22a,
  title     = {Fair {SA}: Sensitivity Analysis for Fairness in Face Recognition},
  author    = {Joshi, Aparna R. and Suau Cuadros, Xavier and Sivakumar, Nivedha and Zappella, Luca and Apostoloff, Nicholas},
  booktitle = {Proceedings of The Algorithmic Fairness through the Lens of Causality and Robustness},
  pages     = {40--58},
  year      = {2022},
  editor    = {Schrouff, Jessica and Dieng, Awa and Rateike, Miriam and Kwegyir-Aggrey, Kweku and Farnadi, Golnoosh},
  volume    = {171},
  series    = {Proceedings of Machine Learning Research},
  month     = {13 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v171/joshi22a/joshi22a.pdf},
  url       = {https://proceedings.mlr.press/v171/joshi22a.html},
  abstract  = {As the use of deep learning in high impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high impact domain is that of face recognition, with real world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition algorithm. While traditional summary statistics suggest that the aggregate performance of face recognition models has continued to improve, these metrics do not directly measure the robustness or fairness of the models. Visual Psychophysics Sensitivity Analysis (VPSA) (19) provides a way to pinpoint the individual causes of failure by way of introducing incremental perturbations in the data. However, perturbations may affect subgroups differently. In this paper, we propose a new fairness evaluation based on robustness in the form of a generic framework that extends VPSA. With this framework, we can analyze the ability of a model to perform fairly for different subgroups of a population affected by perturbations, and pinpoint the exact failure modes for a subgroup by measuring targeted robustness. With the increasing focus on the fairness of models, we use face recognition as an example application of our framework and propose to compactly visualize the fairness analysis of a model via AUC matrices. We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed, thereby uncovering trends that were not visible using the model’s performance on subgroups without perturbations.}
}
Endnote
%0 Conference Paper
%T Fair SA: Sensitivity Analysis for Fairness in Face Recognition
%A Aparna R. Joshi
%A Xavier Suau Cuadros
%A Nivedha Sivakumar
%A Luca Zappella
%A Nicholas Apostoloff
%B Proceedings of The Algorithmic Fairness through the Lens of Causality and Robustness
%C Proceedings of Machine Learning Research
%D 2022
%E Jessica Schrouff
%E Awa Dieng
%E Miriam Rateike
%E Kweku Kwegyir-Aggrey
%E Golnoosh Farnadi
%F pmlr-v171-joshi22a
%I PMLR
%P 40--58
%U https://proceedings.mlr.press/v171/joshi22a.html
%V 171
%X As the use of deep learning in high impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high impact domain is that of face recognition, with real world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition algorithm. While traditional summary statistics suggest that the aggregate performance of face recognition models has continued to improve, these metrics do not directly measure the robustness or fairness of the models. Visual Psychophysics Sensitivity Analysis (VPSA) (19) provides a way to pinpoint the individual causes of failure by way of introducing incremental perturbations in the data. However, perturbations may affect subgroups differently. In this paper, we propose a new fairness evaluation based on robustness in the form of a generic framework that extends VPSA. With this framework, we can analyze the ability of a model to perform fairly for different subgroups of a population affected by perturbations, and pinpoint the exact failure modes for a subgroup by measuring targeted robustness. With the increasing focus on the fairness of models, we use face recognition as an example application of our framework and propose to compactly visualize the fairness analysis of a model via AUC matrices. We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed, thereby uncovering trends that were not visible using the model’s performance on subgroups without perturbations.
APA
Joshi, A.R., Suau Cuadros, X., Sivakumar, N., Zappella, L. & Apostoloff, N. (2022). Fair SA: Sensitivity Analysis for Fairness in Face Recognition. Proceedings of The Algorithmic Fairness through the Lens of Causality and Robustness, in Proceedings of Machine Learning Research 171:40-58. Available from https://proceedings.mlr.press/v171/joshi22a.html.