Weight Entropy-Maximised Evidential Metamodel for Post Hoc Uncertainty
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:54-59, 2026.
Abstract
Reliable uncertainty quantification (UQ) is crucial for deploying deep learning models in safety-critical domains such as medical imaging. Existing post hoc UQ methods either rely on multi-pass inference or suffer from limited expressiveness due to their dependence on final-layer embeddings. In this work, we propose an evidential metamodel, a lightweight post hoc framework that enhances Dirichlet evidential modeling by extracting features from multiple layers of a frozen classifier. This multilayer strategy enriches the metamodel's input with both low-level textures and high-level semantics, enabling more accurate modeling of aleatoric and epistemic uncertainty. To further boost epistemic fidelity, we incorporate Max-WEnt regularization, which maximizes the entropy of learnable scaling weights applied within the metamodel. This promotes internal hypothesis diversity without modifying the base network or incurring test-time overhead. Across seven benchmarks, including medical datasets (BACH, DIV2K, HAM10000, BreaKHis) and natural image tasks (SVHN, Fashion-MNIST, ImageNet-C), our evidential metamodel consistently improves AUROC and calibration over both the base model and prior post hoc UQ methods. Ablation studies confirm the complementary benefits of multilayer features and Max-WEnt. Our approach offers a robust and efficient solution for trustworthy AI in clinical and other high-stakes settings.
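The two ingredients the abstract describes can be illustrated with a minimal sketch. It assumes the standard evidential formulation (Dirichlet parameters α = evidence + 1, with epistemic uncertainty measured as vacuity K/S) and a softmax-normalized entropy over per-layer scaling weights; the function names and the toy evidence values are illustrative, not the paper's implementation.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Map non-negative evidence vectors to expected class probabilities
    and an epistemic (vacuity) score, per standard evidential modeling."""
    alpha = evidence + 1.0                     # Dirichlet parameters: alpha_k = e_k + 1
    S = alpha.sum(axis=-1, keepdims=True)      # Dirichlet strength (total evidence + K)
    probs = alpha / S                          # expected class probabilities
    K = evidence.shape[-1]
    epistemic = K / S.squeeze(-1)              # vacuity: high when total evidence is low
    return probs, epistemic

def weight_entropy(w_logits):
    """Entropy of softmax-normalized scaling weights; a Max-WEnt-style
    regularizer would *maximize* this to keep per-layer weights diverse."""
    w = np.exp(w_logits - w_logits.max())
    w /= w.sum()
    return -(w * np.log(w + 1e-12)).sum()

# Toy evidence for 3 classes: a confident sample vs. a near-vacuous one.
evidence = np.array([[10.0, 0.0, 0.0],
                     [0.1, 0.1, 0.1]])
probs, epi = dirichlet_uncertainty(evidence)   # epi is higher for the second row
```

In the full method, the evidence would be produced by the metamodel from weighted multi-layer features of the frozen classifier, with the weight-entropy term added to the training loss.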