Harmful Impacts of ML: Empirically Triangulating the Concerns and Practices of Developers
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:27-63, 2025.
Abstract
Machine learning (ML) models used in decision-making tasks are known to cause harmful impacts. To tackle such impacts, researchers have focused on developing tools to mitigate algorithmic fairness issues and to support ML developers in their algorithmic fairness-centered practices. Yet, little work has triangulated the concerns and practices of ML developers towards the broader impact of ML arising from complex questions of distributive unfairness and from unsustainable pillars underlying ML models (e.g., opaque task formulation, inappropriate datasets, energy-intensive infrastructures). In this qualitative study, we conducted 30 semi-structured interviews with a convenience sample of developers with varying educational backgrounds and varying experience with ML and algorithmic fairness. We surface (mis)conceptions and (questionable) practices around harms and their mitigation. Our study reveals no shared standard across developers’ concerns and practices, as well as tensions developers face when attempting to curb the undesirable impacts of ML models. These insights triangulate prior results on algorithmic fairness and shed light on various unsolved theoretical, design, methodological, and governance challenges. Our findings constitute a vital step towards supporting developers, and our broader community, in navigating the growing, increasingly ubiquitous footprint of ML.