Debiasing Global Workspace: A Cognitive Neural Framework for Learning Debiased and Interpretable Representations
Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, PMLR 285:85-99, 2024.
Abstract
When trained on biased datasets, Deep Neural Networks (DNNs) often base their predictions on features that are spuriously correlated with the target labels. This is especially problematic when these irrelevant features are easier for the model to learn than the truly relevant ones. Many debiasing methods have been proposed to address this issue, but they often require predefined bias labels and incur substantial additional computational cost by incorporating auxiliary models. We instead offer a perspective orthogonal to existing approaches, inspired by cognitive science and, specifically, Global Workspace Theory (GWT). Our method, Debiasing Global Workspace (DGW), is a novel debiasing framework consisting of specialized modules and a shared workspace, which together provide increased modularity and improved debiasing performance. Additionally, DGW makes the decision-making process more transparent by visualizing, through attention masks, which input features the model focuses on during training and inference. We begin by proposing an instantiation of GWT for debiasing. We then describe the implementation of each component of DGW. Finally, we validate our method across various biased datasets, demonstrating its effectiveness in mitigating biases and improving model performance.
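The abstract's shared-workspace idea can be illustrated with a minimal cross-attention sketch. This is not the paper's implementation: the class name `SharedWorkspace`, the slot count, and all dimensions are hypothetical, and the weights are random rather than learned. It only shows how a small set of workspace slots could read from input features and expose attention masks that indicate which features are attended to.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SharedWorkspace:
    """Illustrative sketch (not the paper's DGW): a small set of
    workspace slots performs cross-attention over input features.
    The softmax attention masks can be visualized to see which
    features each slot focuses on."""

    def __init__(self, n_slots, d_model, seed=0):
        rng = np.random.default_rng(seed)
        # Random stand-ins for learned parameters.
        self.slots = rng.standard_normal((n_slots, d_model)) / np.sqrt(d_model)
        self.w_key = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.w_val = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    def __call__(self, features):
        # features: (n_tokens, d_model) from some specialized module.
        keys = features @ self.w_key
        values = features @ self.w_val
        scores = self.slots @ keys.T / np.sqrt(keys.shape[-1])
        masks = softmax(scores, axis=-1)   # (n_slots, n_tokens) attention masks
        read = masks @ values              # workspace contents after reading
        return read, masks

ws = SharedWorkspace(n_slots=4, d_model=8)
feats = np.random.default_rng(1).standard_normal((16, 8))
read, masks = ws(feats)
print(read.shape, masks.shape)           # (4, 8) (4, 16)
print(bool(np.allclose(masks.sum(axis=-1), 1.0)))  # True
```

Each row of `masks` sums to one, so it can be rendered directly as a heatmap over the input features, which is the kind of transparency the abstract attributes to DGW's attention masks.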