Early Exit Ensembles for Uncertainty Quantification

Lorena Qendro, Alexander Campbell, Pietro Lio, Cecilia Mascolo
Proceedings of Machine Learning for Health, PMLR 158:181-195, 2021.

Abstract

Deep learning is increasingly used for decision-making in health applications. However, commonly used deep learning models are deterministic and unable to provide any estimate of predictive uncertainty. Quantifying model uncertainty is crucial for reducing the risk of misdiagnosis by informing practitioners of low-confidence predictions. To address this issue, we propose early exit ensembles, a novel framework capable of capturing predictive uncertainty via an implicit ensemble of early exits. We evaluate our approach on the task of classification using three state-of-the-art deep learning architectures applied to three medical imaging datasets. Our experiments show that early exit ensembles provide better-calibrated uncertainty than Monte Carlo dropout and deep ensembles using just a single forward pass of the model. Depending on the dataset and baseline, early exit ensembles improve uncertainty metrics by up to 2x while increasing accuracy by up to 2% over their single-model counterpart. Finally, our results suggest that by providing well-calibrated predictive uncertainty for both in- and out-of-distribution inputs, early exit ensembles have the potential to improve the trustworthiness of models in high-risk medical decision-making.
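The core idea sketched in the abstract, aggregating the predictions of several early exit heads from one forward pass into an ensemble estimate with an attached uncertainty, can be illustrated as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names are invented, and predictive entropy is used here as one common uncertainty measure for ensemble predictions.

```python
import numpy as np

def ensemble_predict(exit_probs):
    """Average the class probabilities produced by K early exit heads.

    exit_probs: array of shape (K, N, C) -- K exits, N inputs, C classes,
    all obtained from a single forward pass of the backbone network.
    Returns the mean prediction of shape (N, C).
    """
    return np.mean(exit_probs, axis=0)

def predictive_entropy(probs, eps=1e-12):
    """Entropy of the mean prediction: a standard scalar uncertainty score."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

# Hypothetical toy example: 3 exits, 2 inputs, 4 classes.
# The exits agree on input 0 but disagree on input 1.
exit_probs = np.array([
    [[0.90, 0.05, 0.03, 0.02], [0.70, 0.10, 0.10, 0.10]],
    [[0.88, 0.06, 0.03, 0.03], [0.10, 0.70, 0.10, 0.10]],
    [[0.92, 0.04, 0.02, 0.02], [0.10, 0.10, 0.70, 0.10]],
])

mean_probs = ensemble_predict(exit_probs)      # shape (2, 4)
uncertainty = predictive_entropy(mean_probs)   # disagreement among exits
                                               # yields higher uncertainty
                                               # for input 1 than input 0
```

Because every exit head shares the same backbone computation, the ensemble requires only one forward pass, in contrast to Monte Carlo dropout (multiple stochastic passes) or deep ensembles (multiple independently trained models).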

Cite this Paper


BibTeX
@InProceedings{pmlr-v158-qendro21a,
  title     = {Early Exit Ensembles for Uncertainty Quantification},
  author    = {Qendro, Lorena and Campbell, Alexander and Lio, Pietro and Mascolo, Cecilia},
  booktitle = {Proceedings of Machine Learning for Health},
  pages     = {181--195},
  year      = {2021},
  editor    = {Roy, Subhrajit and Pfohl, Stephen and Rocheteau, Emma and Tadesse, Girmaw Abebe and Oala, Luis and Falck, Fabian and Zhou, Yuyin and Shen, Liyue and Zamzmi, Ghada and Mugambi, Purity and Zirikly, Ayah and McDermott, Matthew B. A. and Alsentzer, Emily},
  volume    = {158},
  series    = {Proceedings of Machine Learning Research},
  month     = {04 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v158/qendro21a/qendro21a.pdf},
  url       = {https://proceedings.mlr.press/v158/qendro21a.html},
  abstract  = {Deep learning is increasingly used for decision-making in health applications. However, commonly used deep learning models are deterministic and are unable to provide any estimate of predictive uncertainty. Quantifying model uncertainty is crucial for reducing the risk of misdiagnosis by informing practitioners of low-confident predictions. To address this issue, we propose early exit ensembles, a novel framework capable of capturing predictive uncertainty via an implicit ensemble of early exits. We evaluate our approach on the task of classification using three state-of-the-art deep learning architectures applied to three medical imaging datasets. Our experiments show that early exit ensembles provide better-calibrated uncertainty compared to Monte Carlo dropout and deep ensembles using just a single forward-pass of the model. Depending on the dataset and baseline, early exit ensembles can improve uncertainty metrics up to 2x, while increasing accuracy by up to 2% over its single model counterpart. Finally, our results suggest that by providing well-calibrated predictive uncertainty for both in- and out-of-distribution inputs, early exit ensembles have the potential to improve trustworthiness of models in high-risk medical decision-making.}
}
Endnote
%0 Conference Paper
%T Early Exit Ensembles for Uncertainty Quantification
%A Lorena Qendro
%A Alexander Campbell
%A Pietro Lio
%A Cecilia Mascolo
%B Proceedings of Machine Learning for Health
%C Proceedings of Machine Learning Research
%D 2021
%E Subhrajit Roy
%E Stephen Pfohl
%E Emma Rocheteau
%E Girmaw Abebe Tadesse
%E Luis Oala
%E Fabian Falck
%E Yuyin Zhou
%E Liyue Shen
%E Ghada Zamzmi
%E Purity Mugambi
%E Ayah Zirikly
%E Matthew B. A. McDermott
%E Emily Alsentzer
%F pmlr-v158-qendro21a
%I PMLR
%P 181--195
%U https://proceedings.mlr.press/v158/qendro21a.html
%V 158
%X Deep learning is increasingly used for decision-making in health applications. However, commonly used deep learning models are deterministic and are unable to provide any estimate of predictive uncertainty. Quantifying model uncertainty is crucial for reducing the risk of misdiagnosis by informing practitioners of low-confident predictions. To address this issue, we propose early exit ensembles, a novel framework capable of capturing predictive uncertainty via an implicit ensemble of early exits. We evaluate our approach on the task of classification using three state-of-the-art deep learning architectures applied to three medical imaging datasets. Our experiments show that early exit ensembles provide better-calibrated uncertainty compared to Monte Carlo dropout and deep ensembles using just a single forward-pass of the model. Depending on the dataset and baseline, early exit ensembles can improve uncertainty metrics up to 2x, while increasing accuracy by up to 2% over its single model counterpart. Finally, our results suggest that by providing well-calibrated predictive uncertainty for both in- and out-of-distribution inputs, early exit ensembles have the potential to improve trustworthiness of models in high-risk medical decision-making.
APA
Qendro, L., Campbell, A., Lio, P. & Mascolo, C. (2021). Early Exit Ensembles for Uncertainty Quantification. Proceedings of Machine Learning for Health, in Proceedings of Machine Learning Research 158:181-195. Available from https://proceedings.mlr.press/v158/qendro21a.html.