Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics
Proceedings of the 2nd Machine Learning for Health symposium, PMLR 193:62-87, 2022.
Abstract
We propose a general approach for training survival analysis models that minimizes a worst-case error across all subpopulations that are large enough (occurring with at least a user-specified minimum probability). This approach uses a training loss function that does not know any demographic information to treat as sensitive. Despite this, we demonstrate that our proposed approach often scores better on recently established fairness metrics (without a significant drop in prediction accuracy) compared to various baselines, including ones which directly use sensitive demographic information in their training loss. Our code is available at: https://github.com/discovershu/DRO_COX