The epistemic dimension of algorithmic fairness: assessing its impact in innovation diffusion and fair policy making

Camilla Quaresmini, Eugenia Villa, Valentina Breschi, Viola Schiaffonati, Mara Tanelli
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:116-134, 2025.

Abstract

Algorithmic fairness is an expanding field that addresses a range of discrimination issues associated with algorithmic processes. Most works in the literature, however, analyze it only from an ethical perspective, concentrating on the moral principles and values that should guide the design and evaluation of algorithms, while disregarding the epistemic dimension related to knowledge transmission and validation. This aspect of algorithmic fairness should also be included in the debate, as it is crucial to identifying a specific type of harm: an individual may be systematically excluded from the dissemination of knowledge due to the attribution of a credibility deficit or excess. In this work, we characterize and analyze the impact of such credibility deficits or excesses on the diffusion of innovations at a societal scale, a phenomenon driven by individual attitudes, social interactions, and the strength of mutual connections. Discrimination might shape the latter, ultimately modifying how innovations spread within the network. In this light, incorporating the epistemic dimension into innovation diffusion models, also from a formal point of view, becomes paramount, especially if these models are intended to support fair policy design. To this end, we formalize the epistemic properties of a social environment by extending the well-established Linear Threshold Model (LTM) in an epistemic direction, showing the impact of epistemic biases on innovation diffusion. By examining this impact in both open-loop and closed-loop scenarios featuring optimal fostering policies, our results shed light on the pivotal role the epistemic dimension might play in the debate on algorithmic fairness in decision-making.
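As an illustrative aside, not part of the published record above: the paper builds on the Linear Threshold Model (LTM), in which a node adopts an innovation once the summed influence of its already-adopting neighbors reaches a personal threshold. The minimal Python sketch below scales each sender's influence by a hypothetical per-node credibility factor (deficit < 1, excess > 1) to mimic the epistemic bias discussed in the abstract; the function epistemic_ltm, the credibility dictionary, and all numerical values are assumptions made here for illustration and do not reproduce the paper's formalization.

    # Minimal sketch (assumptions labeled): a standard Linear Threshold Model
    # whose edge weights are modulated by a per-sender "credibility" factor.
    # The modulation rule and all values are illustrative, not the paper's model.

    def epistemic_ltm(neighbors_in, weights, thresholds, credibility, seeds, max_steps=50):
        """Run an LTM where the influence of node u on node v is
        weights[(u, v)] scaled by credibility[u] (deficit < 1, excess > 1)."""
        active = set(seeds)
        for _ in range(max_steps):
            newly_active = set()
            for v in thresholds:
                if v in active:
                    continue
                # Total influence received from already-adopting in-neighbors.
                influence = sum(
                    weights[(u, v)] * credibility[u]
                    for u in neighbors_in[v]
                    if u in active
                )
                if influence >= thresholds[v]:
                    newly_active.add(v)
            if not newly_active:
                break  # diffusion has stabilized
            active |= newly_active
        return active

    if __name__ == "__main__":
        # Toy 4-node network: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
        neighbors_in = {0: [], 1: [0], 2: [0], 3: [1, 2]}
        weights = {(0, 1): 0.6, (0, 2): 0.6, (1, 3): 0.5, (2, 3): 0.5}
        thresholds = {0: 0.0, 1: 0.5, 2: 0.5, 3: 0.7}
        no_bias = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
        deficit = {0: 1.0, 1: 1.0, 2: 0.3, 3: 1.0}  # node 2 suffers a credibility deficit
        print(epistemic_ltm(neighbors_in, weights, thresholds, no_bias, seeds={0}))   # {0, 1, 2, 3}
        print(epistemic_ltm(neighbors_in, weights, thresholds, deficit, seeds={0}))   # {0, 1, 2}

In this toy run, assigning node 2 a credibility deficit shrinks the final adopter set from {0, 1, 2, 3} to {0, 1, 2}: node 3's received influence drops from 1.0 to 0.65, below its threshold of 0.7, illustrating how a credibility deficit alone can change how far an innovation spreads.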

Cite this Paper


BibTeX
@InProceedings{pmlr-v294-quaresmini25a,
  title     = {The epistemic dimension of algorithmic fairness: assessing its impact in innovation diffusion and fair policy making},
  author    = {Quaresmini, Camilla and Villa, Eugenia and Breschi, Valentina and Schiaffonati, Viola and Tanelli, Mara},
  booktitle = {Proceedings of Fourth European Workshop on Algorithmic Fairness},
  pages     = {116--134},
  year      = {2025},
  editor    = {Weerts, Hilde and Pechenizkiy, Mykola and Allhutter, Doris and CorrĂȘa, Ana Maria and Grote, Thomas and Liem, Cynthia},
  volume    = {294},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--02 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v294/main/assets/quaresmini25a/quaresmini25a.pdf},
  url       = {https://proceedings.mlr.press/v294/quaresmini25a.html},
  abstract  = {Algorithmic fairness is an expanding field that addresses a range of discrimination issues associated with algorithmic processes. However, most works in the literature focus on analyzing it only from an ethical perspective, focusing on moral principles and values that should be considered in the design and evaluation of algorithms, while disregarding the epistemic dimension related to knowledge transmission and validation. However, this aspect of algorithmic fairness should also be included in the debate, as it is crucial to introduce a specific type of harm: an individual may be systematically excluded from the dissemination of knowledge due to the attribution of a credibility deficit/excess. In this work, we specifically focus on characterizing and analyzing the impact of this credibility deficit or excess on the diffusion of innovations on a societal scale, a phenomenon driven by individual attitudes and social interactions, and also by the strength of mutual connections. Indeed, discrimination might shape the latter, ultimately modifying how innovations spread within the network. In this light, to incorporate, also from a formal point of view, the epistemic dimension in innovation diffusion models becomes paramount, especially if these models are intended to support fair policy design. For these reasons, we formalize the epistemic properties of a social environment, by extending the well-established Linear Threshold Model (LTM) in an epistemic direction to show the impact of epistemic biases in innovation diffusion. Focusing on the impact of epistemic bias in both open-loop and closed-loop scenarios featuring optimal fostering policies, our results shed light on the pivotal role the epistemic dimension might have in the debate of algorithmic fairness in decision-making.}
}
Endnote
%0 Conference Paper
%T The epistemic dimension of algorithmic fairness: assessing its impact in innovation diffusion and fair policy making
%A Camilla Quaresmini
%A Eugenia Villa
%A Valentina Breschi
%A Viola Schiaffonati
%A Mara Tanelli
%B Proceedings of Fourth European Workshop on Algorithmic Fairness
%C Proceedings of Machine Learning Research
%D 2025
%E Hilde Weerts
%E Mykola Pechenizkiy
%E Doris Allhutter
%E Ana Maria CorrĂȘa
%E Thomas Grote
%E Cynthia Liem
%F pmlr-v294-quaresmini25a
%I PMLR
%P 116--134
%U https://proceedings.mlr.press/v294/quaresmini25a.html
%V 294
%X Algorithmic fairness is an expanding field that addresses a range of discrimination issues associated with algorithmic processes. However, most works in the literature focus on analyzing it only from an ethical perspective, focusing on moral principles and values that should be considered in the design and evaluation of algorithms, while disregarding the epistemic dimension related to knowledge transmission and validation. However, this aspect of algorithmic fairness should also be included in the debate, as it is crucial to introduce a specific type of harm: an individual may be systematically excluded from the dissemination of knowledge due to the attribution of a credibility deficit/excess. In this work, we specifically focus on characterizing and analyzing the impact of this credibility deficit or excess on the diffusion of innovations on a societal scale, a phenomenon driven by individual attitudes and social interactions, and also by the strength of mutual connections. Indeed, discrimination might shape the latter, ultimately modifying how innovations spread within the network. In this light, to incorporate, also from a formal point of view, the epistemic dimension in innovation diffusion models becomes paramount, especially if these models are intended to support fair policy design. For these reasons, we formalize the epistemic properties of a social environment, by extending the well-established Linear Threshold Model (LTM) in an epistemic direction to show the impact of epistemic biases in innovation diffusion. Focusing on the impact of epistemic bias in both open-loop and closed-loop scenarios featuring optimal fostering policies, our results shed light on the pivotal role the epistemic dimension might have in the debate of algorithmic fairness in decision-making.
APA
Quaresmini, C., Villa, E., Breschi, V., Schiaffonati, V. & Tanelli, M. (2025). The epistemic dimension of algorithmic fairness: assessing its impact in innovation diffusion and fair policy making. Proceedings of Fourth European Workshop on Algorithmic Fairness, in Proceedings of Machine Learning Research 294:116-134. Available from https://proceedings.mlr.press/v294/quaresmini25a.html.