Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model

Jean-Rémy Conti, Nathan Noiry, Stephan Clemencon, Vincent Despiegel, Stéphane Gentric
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:4344-4369, 2022.

Abstract

Despite the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges practitioners to develop fair systems with uniform/comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, BFAR and BFRR, which better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. Indeed, extensive numerical experiments on a variety of datasets show that a careful selection of these hyperparameters significantly reduces gender bias.
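
The abstract describes the loss at a high level: each identity is modeled as a von Mises-Fisher component on the unit hypersphere, with a concentration parameter kappa that depends on the identity's gender, so that only two hyperparameters (kappa_female, kappa_male) control the intra-class variance of each group. As a purely illustrative sketch (the paper's exact formulation and implementation may differ), this can be read as a softmax over per-identity vMF log-densities log C_d(kappa) + kappa * mu^T z, where C_d(kappa) is the vMF normalizing constant. The names FairVMFLoss and log_vmf_const, the gender coding (0 = female, 1 = male), the jointly learned identity prototypes, and the use of scipy for the Bessel function are all assumptions for this sketch, not the authors' code.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.special import ive  # exponentially scaled modified Bessel function I_v

def log_vmf_const(kappa: float, d: int) -> float:
    # log C_d(kappa) = (d/2 - 1) log kappa - (d/2) log(2*pi) - log I_{d/2-1}(kappa),
    # with log I_nu(kappa) recovered from the scaled Bessel: log ive(nu, kappa) + kappa.
    nu = d / 2.0 - 1.0
    return (nu * math.log(kappa) - (d / 2.0) * math.log(2.0 * math.pi)
            - (math.log(ive(nu, kappa)) + kappa))

class FairVMFLoss(nn.Module):
    # Hypothetical sketch: one vMF component per identity, with the concentration
    # kappa chosen by the identity's gender (two hyperparameters overall).
    def __init__(self, n_ids, dim, id_gender, kappa_female, kappa_male):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_ids, dim))  # identity means
        kappas = torch.where(id_gender == 0,  # assumed coding: 0 = female, 1 = male
                             torch.tensor(float(kappa_female)),
                             torch.tensor(float(kappa_male)))
        log_c = torch.tensor([log_vmf_const(float(k), dim) for k in kappas])
        self.register_buffer("kappas", kappas)
        self.register_buffer("log_c", log_c)

    def forward(self, z, labels):
        z = F.normalize(z, dim=1)                 # embeddings on the unit sphere
        mu = F.normalize(self.prototypes, dim=1)  # prototypes on the unit sphere
        # Posterior of identity j given z (uniform prior over identities):
        # softmax over log C_d(kappa_j) + kappa_j * mu_j^T z.
        logits = self.log_c + self.kappas * (z @ mu.t())
        return F.cross_entropy(logits, labels)

In the paper's setting, z would be the output of the shallow post-processing network applied to frozen pre-trained embeddings, and (kappa_female, kappa_male) would be tuned against the BFAR/BFRR fairness metrics.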

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-conti22a,
  title     = {Mitigating Gender Bias in Face Recognition using the von Mises-{F}isher Mixture Model},
  author    = {Conti, Jean-R{\'e}my and Noiry, Nathan and Clemencon, Stephan and Despiegel, Vincent and Gentric, St{\'e}phane},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {4344--4369},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/conti22a/conti22a.pdf},
  url       = {https://proceedings.mlr.press/v162/conti22a.html}
}
Endnote
%0 Conference Paper
%T Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model
%A Jean-Rémy Conti
%A Nathan Noiry
%A Stephan Clemencon
%A Vincent Despiegel
%A Stéphane Gentric
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-conti22a
%I PMLR
%P 4344--4369
%U https://proceedings.mlr.press/v162/conti22a.html
%V 162
APA
Conti, J., Noiry, N., Clemencon, S., Despiegel, V. & Gentric, S. (2022). Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:4344-4369. Available from https://proceedings.mlr.press/v162/conti22a.html.