Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics

Shu Hu, George H. Chen
Proceedings of the 2nd Machine Learning for Health symposium, PMLR 193:62-87, 2022.

Abstract

We propose a general approach for training survival analysis models that minimizes a worst-case error across all subpopulations that are large enough (occurring with at least a user-specified minimum probability). This approach uses a training loss function that does not know any demographic information to treat as sensitive. Despite this, we demonstrate that our proposed approach often scores better on recently established fairness metrics (without a significant drop in prediction accuracy) compared to various baselines, including ones which directly use sensitive demographic information in their training loss. Our code is available at: https://github.com/discovershu/DRO_COX
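The worst-case objective described in the abstract has a well-known equivalent form: the largest average loss over any subpopulation occurring with probability at least a user-chosen level alpha equals the conditional value at risk (CVaR) of the per-example loss at level alpha, via the Rockafellar-Uryasev representation CVaR_alpha(L) = min over eta of { eta + E[(L - eta)_+] / alpha }. The Python/PyTorch sketch below illustrates that idea for a Cox model. It is our illustrative reconstruction, not the authors' released code (see the linked repository for that); the function names are hypothetical, and treating each uncensored example's partial-likelihood term as its own per-example loss is a simplification, since the Cox risk sets couple examples together.

    import torch

    def cox_per_example_loss(risk_scores, times, events):
        """Per-example negative Cox partial log-likelihood terms (no tie handling).

        risk_scores: (n,) model outputs f(x_i)
        times:       (n,) observed event or censoring times
        events:      (n,) 1.0 if the event was observed, 0.0 if censored
        """
        # at_risk[i, j] is True when example j is still at risk at time t_i
        at_risk = times.unsqueeze(0) >= times.unsqueeze(1)
        # log of the sum of exp(risk score) over each example's risk set
        log_risk_set = torch.logsumexp(
            risk_scores.unsqueeze(0).masked_fill(~at_risk, float("-inf")), dim=1
        )
        # only uncensored examples contribute a term
        return events * (log_risk_set - risk_scores)

    def cvar_objective(per_example_loss, eta, alpha):
        """Rockafellar-Uryasev form of CVaR_alpha: an upper bound on the
        average loss over any subpopulation of probability mass >= alpha."""
        return eta + torch.relu(per_example_loss - eta).mean() / alpha

In training, eta can be treated as one extra scalar parameter and minimized jointly with the model, e.g. objective = cvar_objective(cox_per_example_loss(model(x), t, d), eta, alpha=0.2); the outer problem is convex in eta, so joint gradient descent is a standard choice.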

Cite this Paper


BibTeX
@InProceedings{pmlr-v193-hu22a,
  title     = {Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics},
  author    = {Hu, Shu and Chen, George H.},
  booktitle = {Proceedings of the 2nd Machine Learning for Health symposium},
  pages     = {62--87},
  year      = {2022},
  editor    = {Parziale, Antonio and Agrawal, Monica and Joshi, Shalmali and Chen, Irene Y. and Tang, Shengpu and Oala, Luis and Subbaswamy, Adarsh},
  volume    = {193},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v193/hu22a/hu22a.pdf},
  url       = {https://proceedings.mlr.press/v193/hu22a.html},
  abstract  = {We propose a general approach for training survival analysis models that minimizes a worst-case error across all subpopulations that are large enough (occurring with at least a user-specified minimum probability). This approach uses a training loss function that does not know any demographic information to treat as sensitive. Despite this, we demonstrate that our proposed approach often scores better on recently established fairness metrics (without a significant drop in prediction accuracy) compared to various baselines, including ones which directly use sensitive demographic information in their training loss. Our code is available at: https://github.com/discovershu/DRO_COX}
}
Endnote
%0 Conference Paper
%T Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics
%A Shu Hu
%A George H. Chen
%B Proceedings of the 2nd Machine Learning for Health symposium
%C Proceedings of Machine Learning Research
%D 2022
%E Antonio Parziale
%E Monica Agrawal
%E Shalmali Joshi
%E Irene Y. Chen
%E Shengpu Tang
%E Luis Oala
%E Adarsh Subbaswamy
%F pmlr-v193-hu22a
%I PMLR
%P 62--87
%U https://proceedings.mlr.press/v193/hu22a.html
%V 193
%X We propose a general approach for training survival analysis models that minimizes a worst-case error across all subpopulations that are large enough (occurring with at least a user-specified minimum probability). This approach uses a training loss function that does not know any demographic information to treat as sensitive. Despite this, we demonstrate that our proposed approach often scores better on recently established fairness metrics (without a significant drop in prediction accuracy) compared to various baselines, including ones which directly use sensitive demographic information in their training loss. Our code is available at: https://github.com/discovershu/DRO_COX
APA
Hu, S. & Chen, G. H. (2022). Distributionally Robust Survival Analysis: A Novel Fairness Loss Without Demographics. Proceedings of the 2nd Machine Learning for Health symposium, in Proceedings of Machine Learning Research 193:62-87. Available from https://proceedings.mlr.press/v193/hu22a.html.