Weight Entropy-Maximised Evidential Metamodel for Post Hoc Uncertainty

Gouranga Bala, Dhruvi Ganatra, Amit Sethi
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:54-59, 2026.

Abstract

Reliable uncertainty quantification (UQ) is crucial for deploying deep learning models in safety-critical domains such as medical imaging. Existing post hoc UQ methods either rely on multi-pass inference or suffer from limited expressiveness due to their dependence on final-layer embeddings. In this work, we propose an evidential metamodel, a lightweight post hoc framework that enhances Dirichlet evidential modeling by extracting features from multiple layers of a frozen classifier. This multilayer strategy enriches the metamodel input with both low-level textures and high-level semantics, enabling more accurate modeling of aleatoric and epistemic uncertainty. To further boost epistemic fidelity, we incorporate Max-WEnt regularization, which maximizes the entropy of learnable scaling weights applied within the metamodel. This promotes internal hypothesis diversity without modifying the base network or incurring test-time overhead. Across seven benchmarks, including medical datasets (BACH, DIV2K, HAM10000, BreakHIS) and natural image tasks (SVHN, Fashion-MNIST, ImageNet-C), our evidential metamodel consistently improves AUROC and calibration over both the base model and prior post hoc UQ methods. Ablation studies confirm the complementary benefits of multilayer features and Max-WEnt. Our approach offers a robust and efficient solution for trustworthy AI in clinical and other high-stakes settings.
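The abstract's core ingredients — concatenating multi-layer features, mapping them to Dirichlet evidence, and an entropy term over scaling weights — can be illustrated with a minimal NumPy sketch. All names, shapes, and the tiny linear metamodel here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features from two layers of a frozen classifier (batch of 4).
feat_low = rng.normal(size=(4, 8))    # low-level (texture) features
feat_high = rng.normal(size=(4, 16))  # high-level (semantic) features

# Per-layer scaling weights; the Max-WEnt idea is to regularise these
# toward high entropy (here they are fixed for illustration).
w = np.array([0.5, 0.5])
x = np.concatenate([w[0] * feat_low, w[1] * feat_high], axis=1)

K = 3  # number of classes
W = rng.normal(scale=0.1, size=(x.shape[1], K))  # toy linear metamodel

# Evidential head: non-negative evidence via softplus, then Dirichlet params.
evidence = np.log1p(np.exp(x @ W))
alpha = evidence + 1.0

# Dirichlet mean gives class probabilities; low total evidence means
# high epistemic uncertainty (u in (0, 1]).
probs = alpha / alpha.sum(axis=1, keepdims=True)
epistemic_u = K / alpha.sum(axis=1)

# Entropy of the normalised scaling weights -- a Max-WEnt-style
# regulariser would be maximised during metamodel training.
p = w / w.sum()
weight_entropy = -(p * np.log(p)).sum()
```

The key property is that uniform evidence yields high `epistemic_u`, so out-of-distribution inputs that excite no class-specific evidence are flagged without any extra forward passes through the base network.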

Cite this Paper


BibTeX
@InProceedings{pmlr-v317-bala26a,
  title     = {Weight Entropy-Maximised Evidential Metamodel for Post Hoc Uncertainty},
  author    = {Bala, Gouranga and Ganatra, Dhruvi and Sethi, Amit},
  booktitle = {Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare},
  pages     = {54--59},
  year      = {2026},
  editor    = {Wu, Junde and Pan, Jiazhen and Zhu, Jiayuan and Luo, Luyang and Li, Yitong and Xu, Min and Jin, Yueming and Rueckert, Daniel},
  volume    = {317},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--21 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v317/main/assets/bala26a/bala26a.pdf},
  url       = {https://proceedings.mlr.press/v317/bala26a.html},
  abstract  = {Reliable uncertainty quantification (UQ) is crucial for deploying deep learning models in safety-critical domains such as medical imaging. Existing post hoc UQ methods either rely on multi-pass inference or suffer from limited expressiveness due to their dependence on final-layer embeddings. In this work, we propose evidential meta model, a lightweight post-hoc framework that enhances Dirichlet evidential modeling by extracting features from multiple layers of a frozen classifier. This multilayer strategy enriches the metamodel input with both low-level textures and high-level semantics, enabling more accurate modeling of aleatoric and epistemic uncertainty. To further boost epistemic fidelity, we incorporate Max-WEnt regularization, which maximizes the entropy of learnable scaling weights applied within the meta-model. This promotes internal hypothesis diversity without modifying the base network or incurring test-time overhead. Across seven benchmarks including medical datasets (BACH, DIV2K, HAM10000, BreakHIS) and natural image tasks (SVHN, Fashion-MNIST, ImageNet-C) our evidential metamodel consistently improves AUROC and calibration over both the base model and prior post-hoc UQ methods. Ablation studies confirm the complementary benefits of multilayer features and Max-WEnt. Our approach offers a robust and efficient solution for trustworthy AI in clinical and other high stakes settings.}
}
Endnote
%0 Conference Paper
%T Weight Entropy-Maximised Evidential Metamodel for Post Hoc Uncertainty
%A Gouranga Bala
%A Dhruvi Ganatra
%A Amit Sethi
%B Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare
%C Proceedings of Machine Learning Research
%D 2026
%E Junde Wu
%E Jiazhen Pan
%E Jiayuan Zhu
%E Luyang Luo
%E Yitong Li
%E Min Xu
%E Yueming Jin
%E Daniel Rueckert
%F pmlr-v317-bala26a
%I PMLR
%P 54--59
%U https://proceedings.mlr.press/v317/bala26a.html
%V 317
%X Reliable uncertainty quantification (UQ) is crucial for deploying deep learning models in safety-critical domains such as medical imaging. Existing post hoc UQ methods either rely on multi-pass inference or suffer from limited expressiveness due to their dependence on final-layer embeddings. In this work, we propose evidential meta model, a lightweight post-hoc framework that enhances Dirichlet evidential modeling by extracting features from multiple layers of a frozen classifier. This multilayer strategy enriches the metamodel input with both low-level textures and high-level semantics, enabling more accurate modeling of aleatoric and epistemic uncertainty. To further boost epistemic fidelity, we incorporate Max-WEnt regularization, which maximizes the entropy of learnable scaling weights applied within the meta-model. This promotes internal hypothesis diversity without modifying the base network or incurring test-time overhead. Across seven benchmarks including medical datasets (BACH, DIV2K, HAM10000, BreakHIS) and natural image tasks (SVHN, Fashion-MNIST, ImageNet-C) our evidential metamodel consistently improves AUROC and calibration over both the base model and prior post-hoc UQ methods. Ablation studies confirm the complementary benefits of multilayer features and Max-WEnt. Our approach offers a robust and efficient solution for trustworthy AI in clinical and other high stakes settings.
APA
Bala, G., Ganatra, D. & Sethi, A. (2026). Weight Entropy-Maximised Evidential Metamodel for Post Hoc Uncertainty. Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, in Proceedings of Machine Learning Research 317:54-59. Available from https://proceedings.mlr.press/v317/bala26a.html.