Equality insights in the development of fairer high-risk AI systems and the control of its discriminatory impacts
Proceedings of the Fourth European Workshop on Algorithmic Fairness, PMLR 294:10-26, 2025.
Abstract
In this paper we take as a reference the AI Act and the EU Directives on standards for equality bodies (2024/1499 and 2024/1500) with the aim of analysing how institutions can play a role in developing fairer AI systems. In parallel, we study the relevance of equality and non-discrimination experts in conveying the scope and complexity of concepts used in the non-discrimination field (such as intersectionality or structural discrimination) to the AI discipline, because these concepts are not always easily translated. We examine these questions in relation to certain provisions of the AI Act that involve data governance, redress measures, the development of AI systems, the assessment of the impact on fundamental rights, and the investigation of discriminatory results of AI systems. Furthermore, we argue that algorithmic discrimination, by shedding new light on the complex, varied and interconnected mechanisms by which discrimination operates, is pressing non-discrimination law to evolve from a simpler structure towards a more sophisticated approach to inequality.