DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction

John Wu, David Wu, Jimeng Sun
Proceedings of the 4th Machine Learning for Health Symposium, PMLR 259:1014-1038, 2025.

Abstract

Automated medical coding, a clinical high-dimensional multilabel task, requires explicit interpretability. Existing works often rely on local interpretability methods, failing to provide comprehensive explanations of the overall mechanism behind each label prediction within a multilabel set. We propose a mechanistic interpretability module called DIctionary Label Attention (DILA) that disentangles uninterpretable dense embeddings into a sparse embedding space, where each nonzero element (a dictionary feature) represents a globally learned medical concept. Through human evaluations, we show that our sparse embeddings are more human-understandable than their dense counterparts by at least 50 percent. Our automated dictionary feature identification pipeline, leveraging large language models (LLMs), uncovers thousands of learned medical concepts by examining and summarizing the highest-activating tokens for each dictionary feature. We represent the relationships between dictionary features and medical codes through a sparse interpretable matrix, enhancing our global understanding of the model’s predictions while maintaining competitive performance and scalability without extensive human annotation.
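The mechanism the abstract describes, mapping dense embeddings into an overcomplete sparse dictionary space and relating dictionary features to medical codes through a sparse matrix, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed dimensions, not the paper's implementation: all names (`W_enc`, `W_dec`, `W_label`) and the random initialization are hypothetical, and the weights would be learned during training in the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: dense token embeddings (d_model) are expanded
# into an overcomplete dictionary space (d_dict >> d_model).
d_model, d_dict, n_tokens, n_codes = 16, 64, 8, 10

# Encoder/decoder weights of a sparse-autoencoder-style module
# (randomly initialized here for illustration; DILA learns these).
W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))

def to_sparse_features(x):
    """Map dense embeddings to nonnegative sparse dictionary activations."""
    return np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU zeroes most features

x = rng.normal(size=(n_tokens, d_model))  # dense token embeddings
z = to_sparse_features(x)                 # sparse dictionary features
x_hat = z @ W_dec                         # reconstruction of the dense input

# To interpret a dictionary feature, inspect its highest-activating
# tokens (the paper summarizes these token sets with an LLM).
top_tokens = np.argsort(z, axis=0)[::-1][:3]  # top-3 token indices per feature

# A (here dense, in the paper sparse) matrix relating dictionary
# features to medical-code scores, simplified from label attention.
W_label = rng.normal(scale=0.1, size=(d_dict, n_codes))
logits = z.max(axis=0) @ W_label  # pool over tokens, score each code

sparsity = float((z == 0).mean())
print(f"fraction of zero activations: {sparsity:.2f}")
```

Because each prediction flows through the nonzero entries of `z` and the feature-to-code matrix, one can read off globally which learned concepts drive which codes, which is the interpretability claim the abstract makes.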

Cite this Paper


BibTeX
@InProceedings{pmlr-v259-wu25a,
  title     = {DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction},
  author    = {Wu, John and Wu, David and Sun, Jimeng},
  booktitle = {Proceedings of the 4th Machine Learning for Health Symposium},
  pages     = {1014--1038},
  year      = {2025},
  editor    = {Hegselmann, Stefan and Zhou, Helen and Healey, Elizabeth and Chang, Trenton and Ellington, Caleb and Mhasawade, Vishwali and Tonekaboni, Sana and Argaw, Peniel and Zhang, Haoran},
  volume    = {259},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v259/main/assets/wu25a/wu25a.pdf},
  url       = {https://proceedings.mlr.press/v259/wu25a.html},
  abstract  = {Automated medical coding, a clinical high-dimensional multilabel task, requires explicit interpretability. Existing works often rely on local interpretability methods, failing to provide comprehensive explanations of the overall mechanism behind each label prediction within a multilabel set. We propose a mechanistic interpretability module called DIctionary Label Attention (DILA) that disentangles uninterpretable dense embeddings into a sparse embedding space, where each nonzero element (a dictionary feature) represents a globally learned medical concept. Through human evaluations, we show that our sparse embeddings are more human understandable than its dense counterparts by at least 50 percent. Our automated dictionary feature identification pipeline, leveraging large language models (LLMs), uncovers thousands of learned medical concepts by examining and summarizing the highest activating tokens for each dictionary feature. We represent the relationships between dictionary features and medical codes through a sparse interpretable matrix, enhancing our global understanding of the model’s predictions while maintaining competitive performance and scalability without extensive human annotation.}
}
Endnote
%0 Conference Paper
%T DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction
%A John Wu
%A David Wu
%A Jimeng Sun
%B Proceedings of the 4th Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2025
%E Stefan Hegselmann
%E Helen Zhou
%E Elizabeth Healey
%E Trenton Chang
%E Caleb Ellington
%E Vishwali Mhasawade
%E Sana Tonekaboni
%E Peniel Argaw
%E Haoran Zhang
%F pmlr-v259-wu25a
%I PMLR
%P 1014--1038
%U https://proceedings.mlr.press/v259/wu25a.html
%V 259
%X Automated medical coding, a clinical high-dimensional multilabel task, requires explicit interpretability. Existing works often rely on local interpretability methods, failing to provide comprehensive explanations of the overall mechanism behind each label prediction within a multilabel set. We propose a mechanistic interpretability module called DIctionary Label Attention (DILA) that disentangles uninterpretable dense embeddings into a sparse embedding space, where each nonzero element (a dictionary feature) represents a globally learned medical concept. Through human evaluations, we show that our sparse embeddings are more human understandable than its dense counterparts by at least 50 percent. Our automated dictionary feature identification pipeline, leveraging large language models (LLMs), uncovers thousands of learned medical concepts by examining and summarizing the highest activating tokens for each dictionary feature. We represent the relationships between dictionary features and medical codes through a sparse interpretable matrix, enhancing our global understanding of the model’s predictions while maintaining competitive performance and scalability without extensive human annotation.
APA
Wu, J., Wu, D., & Sun, J. (2025). DILA: Dictionary Label Attention for Mechanistic Interpretability in High-dimensional Multi-label Medical Coding Prediction. Proceedings of the 4th Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 259:1014-1038. Available from https://proceedings.mlr.press/v259/wu25a.html.