Situating and Understanding Machine Unlearning, Ethically
Proceedings of Fourth European Workshop on Algorithmic Fairness, PMLR 294:362-368, 2025.
Abstract
Machine Unlearning (MU) aims to remove unwanted data and its effects from machine learning models while preserving model performance. Driven by ethical and legal concerns such as privacy (the Right to be Forgotten), security, bias mitigation, and copyright protection, MU faces challenges including technical limitations, ethical ambiguities, and conflicting stakeholder expectations. This paper critically examines MU’s motivations and effectiveness, arguing that it remains unclear 1) what MU does, 2) what it should do, and 3) how its efforts and goals fit together. To clarify these issues, I introduce a tripartite epistemological distinction: 1) never knowing X, 2) learning X and then forgetting it, and 3) acting as if one doesn’t know X. Analyzing cases of copyright, data privacy, and intellectual property, the paper shows inconsistencies between MU’s goals and outcomes, stressing the need for clearer ethics, stakeholder engagement, and transparency. Refining MU is crucial to ensuring that it effectively serves its intended purposes.