Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts

Anna Capellà i Ricart
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:10-26, 2025.

Abstract

In this paper we take as a reference the AI Act and the EU Directives on standards for equality bodies (2024/1499 and 2024/1500) with the aim of analysing how institutions can play a role in developing fairer AI systems. In parallel, we study the relevance of equality and non-discrimination experts in conveying the scope and complexity of concepts used in the non-discrimination field (such as intersectionality or structural discrimination) to the AI discipline, as these are not always easily translated. We examine these questions in relation to certain provisions of the AI Act concerning data governance, redress measures, the development of AI systems, the assessment of the impact on fundamental rights and the investigation of discriminatory results of AI systems. Furthermore, we argue that algorithmic discrimination, by shedding new light on the complex, varied and interconnected mechanisms by which discrimination operates, is pressing non-discrimination law to evolve from a simpler structure to a more sophisticated approach to inequality.

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-ricart25a,
  title     = {Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts},
  author    = {Capell{\`a} i Ricart, Anna},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {10--26},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and Corr{\^e}a, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/ricart25a/ricart25a.pdf},
  url       = {https://proceedings.mlr.press/v294/ricart25a.html},
  abstract  = {In this paper we take as a reference the AI Act and the EU Directives on standards for equality bodies (2024/1499 and 2024/1500) with the aim of analysing how institutions can play a role in developing fairer AI systems. In parallel, we study the relevance of equality and non-discrimination experts in conveying the scope and complexity of concepts used in the non-discrimination field (such as intersectionality or structural discrimination) to the AI discipline, as these are not always easily translated. We examine these questions in relation to certain provisions of the AI Act concerning data governance, redress measures, the development of AI systems, the assessment of the impact on fundamental rights and the investigation of discriminatory results of AI systems. Furthermore, we argue that algorithmic discrimination, by shedding new light on the complex, varied and interconnected mechanisms by which discrimination operates, is pressing non-discrimination law to evolve from a simpler structure to a more sophisticated approach to inequality.}
}
Endnote
%0 Conference Paper
%T Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts
%A Anna Capellà i Ricart
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria Corrêa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-ricart25a
%I PMLR
%P 10--26
%U https://proceedings.mlr.press/v294/ricart25a.html
%V 294
%X In this paper we take as a reference the AI Act and the EU Directives on standards for equality bodies (2024/1499 and 2024/1500) with the aim of analysing how institutions can play a role in developing fairer AI systems. In parallel, we study the relevance of equality and non-discrimination experts in conveying the scope and complexity of concepts used in the non-discrimination field (such as intersectionality or structural discrimination) to the AI discipline, as these are not always easily translated. We examine these questions in relation to certain provisions of the AI Act concerning data governance, redress measures, the development of AI systems, the assessment of the impact on fundamental rights and the investigation of discriminatory results of AI systems. Furthermore, we argue that algorithmic discrimination, by shedding new light on the complex, varied and interconnected mechanisms by which discrimination operates, is pressing non-discrimination law to evolve from a simpler structure to a more sophisticated approach to inequality.
APA
Capellà i Ricart, A. (2025). Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:10-26. Available from https://proceedings.mlr.press/v294/ricart25a.html.