Invisible Inequalities - Intersectional Fairness in Educational AI

Marie Mirsch, Jonas Strube, Carmen Leicht-Scholten
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:403-409, 2025.

Abstract

Drawing on feminist theories of Intersectionality, this paper explores how single-axis approaches to fairness assessments obscure the experiences of individuals facing intersecting forms of discrimination. Three case studies in educational AI illustrate how individuals’ social embeddedness shapes their educational trajectories and why fairness metrics often fail to account for these complexities. The paper argues that addressing invisible inequalities requires a shift from purely technical solutions to context-sensitive fairness evaluations that center on the lived experiences of marginalized people.

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-mirsch25a,
  title     = {Invisible Inequalities - Intersectional Fairness in Educational AI},
  author    = {Mirsch, Marie and Strube, Jonas and Leicht-Scholten, Carmen},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {403--409},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and Corrêa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  section   = {Extended Abstracts},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/mirsch25a/mirsch25a.pdf},
  url       = {https://proceedings.mlr.press/v294/mirsch25a.html},
  abstract  = {Drawing on feminist theories of Intersectionality, this paper explores how single-axis approaches to fairness assessments obscure the experiences of individuals facing intersecting forms of discrimination. Three case studies in educational AI illustrate how individuals’ social embeddedness shapes their educational trajectories and why fairness metrics often fail to account for these complexities. The paper argues that addressing invisible inequalities requires a shift from purely technical solutions to context-sensitive fairness evaluations that center on the lived experiences of marginalized people.}
}
Endnote
%0 Conference Paper
%T Invisible Inequalities - Intersectional Fairness in Educational AI
%A Marie Mirsch
%A Jonas Strube
%A Carmen Leicht-Scholten
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria Corrêa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-mirsch25a
%I PMLR
%P 403--409
%U https://proceedings.mlr.press/v294/mirsch25a.html
%V 294
%X Drawing on feminist theories of Intersectionality, this paper explores how single-axis approaches to fairness assessments obscure the experiences of individuals facing intersecting forms of discrimination. Three case studies in educational AI illustrate how individuals’ social embeddedness shapes their educational trajectories and why fairness metrics often fail to account for these complexities. The paper argues that addressing invisible inequalities requires a shift from purely technical solutions to context-sensitive fairness evaluations that center on the lived experiences of marginalized people.
APA
Mirsch, M., Strube, J. & Leicht-Scholten, C. (2025). Invisible Inequalities - Intersectional Fairness in Educational AI. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:403-409. Available from https://proceedings.mlr.press/v294/mirsch25a.html.