Revealing and Reducing Morphological Biases Using Implicit Neural Representations for Medical Image Registration
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:4026-4041, 2026.
Abstract
Deep learning has enhanced medical image analysis, yet models trained on imbalanced or non-representative populations often exhibit systematic biases, which can lead to substantial performance disparities across patient subgroups. Addressing these disparities is essential to ensure fair and reliable model deployment in clinical practice. Particularly in medical imaging, population-level biases can often be attributed to morphological rather than intensity differences, such as sex-related differences in organ volume. Given that morphological biases in neuroimaging data spuriously correlate with the disease label, we show that bias detection based on general foundation model features (e.g., CLIP and BiomedCLIP) insufficiently captures morphological biases. Therefore, we introduce a bias detection and mitigation pipeline that performs subgroup discovery on deformation representations from a generalizable implicit neural representation (INR). This proof-of-concept study indicates improved performance when using deformation representations instead of general image features for bias detection. Furthermore, our results show that re-balancing the training dataset using the identified subgroups, complemented by INR-generated samples for augmentation, helps to mitigate the bias effect.