Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance

Andrew C. Miller, Leon A. Gatys, Joseph Futoma, Emily Fox
Proceedings of the 6th Machine Learning for Healthcare Conference, PMLR 149:308-336, 2021.

Abstract

Machine learning models — now commonly developed to screen, diagnose, or predict health conditions — are evaluated with a variety of performance metrics. An important first step in assessing the practical utility of a model is to evaluate its average performance over a population of interest. In many settings, it is also critical that the model makes good predictions within predefined subpopulations. For instance, showing that a model is fair or equitable requires evaluating the model’s performance in different demographic subgroups. However, subpopulation performance metrics are typically computed using only data from that subgroup, resulting in higher variance estimates for smaller groups. We devise a procedure to measure subpopulation performance that can be more sample-efficient than the typical estimator. We propose using an evaluation model — a model that describes the conditional distribution of the predictive model score — to form model-based metric (MBM) estimates. Our procedure incorporates model checking and validation, and we propose a computationally efficient approximation of the traditional nonparametric bootstrap to form confidence intervals. We evaluate MBMs on two tasks: a semi-synthetic setting where ground truth metrics are available and a real-world hospital readmission prediction task. We find that MBMs consistently produce more accurate and lower variance estimates of model performance, particularly for small subpopulations.
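To make the idea concrete, the sketch below contrasts the direct per-subgroup estimate of a metric with one simple model-based variant. It is an illustration only: the evaluation model, metric, and data used here (a logistic regression for the outcome given the score and covariates, subgroup accuracy, and synthetic data) are stand-ins chosen for brevity, not the construction in the paper, which models the conditional distribution of the predictive model score and pairs it with an approximate bootstrap for confidence intervals.

# Illustrative sketch only. This is NOT the paper's method; it uses a simple
# logistic-regression "evaluation model" and subgroup accuracy purely to convey
# the idea of a model-based metric (MBM) versus the direct per-subgroup estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical held-out evaluation data: covariates x, binary labels y,
# predictive-model scores s, and a small subgroup indicator g (~5% of samples).
n = 2000
x = rng.normal(size=(n, 3))
logits = x @ np.array([1.0, -1.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
s = 1 / (1 + np.exp(-(logits + rng.normal(scale=0.5, size=n))))  # model scores
g = rng.binomial(1, 0.05, size=n).astype(bool)

# Direct estimate: subgroup accuracy computed from subgroup samples only.
yhat = (s > 0.5).astype(int)
direct_acc = (yhat[g] == y[g]).mean()

# Model-based estimate (one simple variant): fit an evaluation model for
# P(y = 1 | score, covariates) on ALL evaluation data, then average the implied
# probability of a correct prediction over the subgroup's inputs.
eval_model = LogisticRegression().fit(np.column_stack([s, x]), y)
p1 = eval_model.predict_proba(np.column_stack([s[g], x[g]]))[:, 1]
mbm_acc = np.where(yhat[g] == 1, p1, 1 - p1).mean()

print(f"direct subgroup accuracy:      {direct_acc:.3f}")
print(f"model-based subgroup accuracy: {mbm_acc:.3f}")

The point of the sketch is the borrowing of strength: the evaluation model is fit on all held-out data, so the subgroup estimate does not rest solely on the handful of samples in a small group. In practice one would also check and validate the evaluation model and attach bootstrap confidence intervals, as the paper proposes.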

Cite this Paper


BibTeX
@InProceedings{pmlr-v149-miller21a,
  title     = {Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance},
  author    = {Miller, Andrew C. and Gatys, Leon A. and Futoma, Joseph and Fox, Emily},
  booktitle = {Proceedings of the 6th Machine Learning for Healthcare Conference},
  pages     = {308--336},
  year      = {2021},
  editor    = {Jung, Ken and Yeung, Serena and Sendak, Mark and Sjoding, Michael and Ranganath, Rajesh},
  volume    = {149},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--07 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v149/miller21a/miller21a.pdf},
  url       = {https://proceedings.mlr.press/v149/miller21a.html}
}
Endnote
%0 Conference Paper
%T Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance
%A Andrew C. Miller
%A Leon A. Gatys
%A Joseph Futoma
%A Emily Fox
%B Proceedings of the 6th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2021
%E Ken Jung
%E Serena Yeung
%E Mark Sendak
%E Michael Sjoding
%E Rajesh Ranganath
%F pmlr-v149-miller21a
%I PMLR
%P 308--336
%U https://proceedings.mlr.press/v149/miller21a.html
%V 149
APA
Miller, A.C., Gatys, L.A., Futoma, J. & Fox, E.. (2021). Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance. Proceedings of the 6th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 149:308-336 Available from https://proceedings.mlr.press/v149/miller21a.html.