Monte Carlo ExtremalMask: Uncertainty Aware Time Series Model Interpretability For Critical Care Applications
Proceedings of the 10th Machine Learning for Healthcare Conference, PMLR 298, 2025.
Abstract
Model interpretability for biomedical time series (e.g., in critical care medicine) remains a significant challenge: interactions between pathophysiological signals obscure clinical interpretation. Traditional feature-time attribution methods generate static, deterministic saliency masks, which fail to account for the temporal uncertainty and probabilistic nature of model-inferred feature importance in dynamic physiological systems such as acute organ failure. We address this limitation with a probabilistic framework that leverages Monte Carlo Dropout to quantify model-centric epistemic uncertainty in attribution masks. Iterative sampling captures this stochastic variability, although the inherent randomness produces inconsistent mask outputs across sampling iterations. We therefore adopt a dual optimization strategy that incorporates entropy minimization and spatiotemporal variance regularization during training, driving the attribution masks toward higher informativeness and lower entropy while preserving uncertainty quantification. The resulting framework prioritizes feature-time pairs by balancing high attribution scores against low uncertainty estimates, enabling end users to discover clinical biomarkers of time-dependent pathophysiological deterioration. Our work advances healthcare machine learning by formalizing uncertainty-aware interpretability for temporal models, bridging the gap between probabilistic attributions and clinically actionable interpretations in critical care.
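The core mechanics the abstract describes — sampling attribution masks under Monte Carlo Dropout, summarizing them into mean attribution and epistemic uncertainty, regularizing toward low-entropy masks, and ranking feature-time pairs by high attribution with low uncertainty — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `mask_net`, the sample count `S`, the score weight `lam`, and the temporal-difference variance proxy are all assumptions introduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

T, F = 20, 4  # time steps and features (synthetic sizes, assumed)

# Hypothetical mask network: maps each time step's features to a
# per-(time, feature) attribution mask in [0, 1]; the Dropout layer
# supplies the Monte Carlo stochasticity.
mask_net = nn.Sequential(
    nn.Linear(F, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(32, F),
    nn.Sigmoid(),
)

x = torch.randn(T, F)  # one synthetic multivariate time series

# Monte Carlo Dropout: keep dropout active at inference and draw S masks.
mask_net.train()
S = 100
with torch.no_grad():
    samples = torch.stack([mask_net(x) for _ in range(S)])  # (S, T, F)

mean_mask = samples.mean(dim=0)  # attribution estimate per (time, feature)
std_mask = samples.std(dim=0)    # epistemic uncertainty estimate

# Regularizers in the spirit of the dual optimization strategy (sketch):
# binary entropy of the mean mask, plus a temporal-difference variance
# term as a simple spatiotemporal smoothness proxy.
eps = 1e-8
m = mean_mask.clamp(eps, 1 - eps)
entropy = -(m * m.log() + (1 - m) * (1 - m).log()).mean()
temporal_var = mean_mask.diff(dim=0).pow(2).mean()
reg_loss = entropy + temporal_var  # would be added to the training objective

# Prioritize feature-time pairs: high attribution, low uncertainty.
lam = 1.0  # assumed trade-off weight
score = mean_mask - lam * std_mask
top = torch.topk(score.flatten(), k=5).indices
top_pairs = [(int(i) // F, int(i) % F) for i in top]  # (time, feature)
print(top_pairs)
```

In this sketch the ranking score `mean - lam * std` is one simple way to balance attribution magnitude against uncertainty; the paper's actual prioritization criterion may differ.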