Architecture-Aware Explainability in ECG Analysis: A Case Study of Aortic Stenosis Detection with ResNet18, LSTM and ViT-MAE ECG
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:67-75, 2026.
Abstract
Aortic stenosis (AS) remains a major cardiovascular challenge, as early diagnostic markers in electrocardiogram (ECG) signals are often subtle and difficult to identify with conventional approaches. Although deep learning models have demonstrated strong performance in AS detection, their clinical adoption is limited by the insufficient interpretability of model decisions. Existing explainability studies typically focus on individual architectures, leaving open the question of whether different model designs rely on distinct ECG features. In this work, we investigate how neural network architecture influences explainability and clinical interpretability in ECG-based AS classification. We systematically compare three architectures, namely ResNet18, Long Short-Term Memory (LSTM), and a Vision Transformer with Masked Autoencoder (ViT-MAE), trained on the open-access Cardio-mechanical Signals database comprising 100 patients with valvular heart diseases. All models achieved strong predictive performance, with accuracies of 97.23% (ResNet18), 98.96% (LSTM), and 88.56% (ViT-MAE). To analyze model behavior, we apply both Integrated Gradients and Local Interpretable Model-agnostic Explanations (LIME). The results reveal architecture-specific attribution patterns: ResNet18 exhibits broad attention across P-waves, QRS complexes, and ST–T segments; LSTM emphasizes temporally salient QRS-related features; and ViT-MAE prioritizes repolarization-associated regions, including T-waves and QT intervals. Despite these differences, all architectures consistently focus on clinically meaningful ECG regions associated with AS pathophysiology. These findings demonstrate that explainability outcomes are strongly influenced by model architecture and underscore the importance of architecture-aware interpretability strategies for building transparent, reliable, and clinically trustworthy AI systems for cardiovascular diagnosis.
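To make the attribution method concrete, the following is a minimal sketch of Integrated Gradients for a 1-D signal classifier. The toy CNN and the 500-sample synthetic segment are hypothetical stand-ins (they are not the paper's ResNet18, LSTM, or ViT-MAE models, nor the Cardio-mechanical Signals data); the method itself follows the standard formulation, a Riemann-sum approximation of gradients integrated along the straight-line path from a baseline to the input.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D CNN standing in for an ECG classifier (hypothetical stand-in,
# not one of the paper's architectures). Output: 2 classes, e.g. AS vs. control.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

def integrated_gradients(model, x, target, baseline=None, steps=64):
    """Riemann-sum approximation of Integrated Gradients along the
    straight line from `baseline` (default: all-zeros signal) to `x`."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Interpolation coefficients alpha in [0, 1], one per path step.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1)
    interpolated = baseline + alphas * (x - baseline)   # (steps, 1, length)
    interpolated.requires_grad_(True)
    # Gradient of the target-class logit w.r.t. every interpolated input.
    logits = model(interpolated)[:, target]
    grads = torch.autograd.grad(logits.sum(), interpolated)[0]
    # Average path gradients and scale by the input-baseline difference.
    return (x - baseline) * grads.mean(dim=0, keepdim=True)

x = torch.randn(1, 1, 500)            # one synthetic 500-sample ECG segment
attr = integrated_gradients(model, x, target=1)
print(attr.shape)                      # per-sample attribution, same shape as x
```

In practice the per-sample attributions are what get aligned with annotated ECG landmarks (P-wave, QRS, ST–T) to assess which regions each architecture relies on; a useful sanity check is the completeness axiom, i.e. the attributions should sum approximately to the difference between the model's output at the input and at the baseline.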