Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience

Martina G. Vilas, Federico Adolfi, David Poeppel, Gemma Roig
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:49506-49522, 2024.

Abstract

Inner Interpretability is a promising emerging field tasked with uncovering the inner mechanisms of AI systems, though how to develop these mechanistic theories is still much debated. Moreover, recent critiques raise issues that question its usefulness to advance the broader goals of AI. However, it has been overlooked that these issues resemble those that have been grappled with in another field: Cognitive Neuroscience. Here we draw the relevant connections and highlight lessons that can be transferred productively between fields. Based on these, we propose a general conceptual framework and give concrete methodological strategies for building mechanistic explanations in AI inner interpretability research. With this conceptual framework, Inner Interpretability can fend off critiques and position itself on a productive path to explain AI systems.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-vilas24a,
  title     = {Position: An Inner Interpretability Framework for {AI} Inspired by Lessons from Cognitive Neuroscience},
  author    = {Vilas, Martina G. and Adolfi, Federico and Poeppel, David and Roig, Gemma},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {49506--49522},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/vilas24a/vilas24a.pdf},
  url       = {https://proceedings.mlr.press/v235/vilas24a.html},
  abstract  = {Inner Interpretability is a promising emerging field tasked with uncovering the inner mechanisms of AI systems, though how to develop these mechanistic theories is still much debated. Moreover, recent critiques raise issues that question its usefulness to advance the broader goals of AI. However, it has been overlooked that these issues resemble those that have been grappled with in another field: Cognitive Neuroscience. Here we draw the relevant connections and highlight lessons that can be transferred productively between fields. Based on these, we propose a general conceptual framework and give concrete methodological strategies for building mechanistic explanations in AI inner interpretability research. With this conceptual framework, Inner Interpretability can fend off critiques and position itself on a productive path to explain AI systems.}
}
Endnote
%0 Conference Paper
%T Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience
%A Martina G. Vilas
%A Federico Adolfi
%A David Poeppel
%A Gemma Roig
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-vilas24a
%I PMLR
%P 49506--49522
%U https://proceedings.mlr.press/v235/vilas24a.html
%V 235
%X Inner Interpretability is a promising emerging field tasked with uncovering the inner mechanisms of AI systems, though how to develop these mechanistic theories is still much debated. Moreover, recent critiques raise issues that question its usefulness to advance the broader goals of AI. However, it has been overlooked that these issues resemble those that have been grappled with in another field: Cognitive Neuroscience. Here we draw the relevant connections and highlight lessons that can be transferred productively between fields. Based on these, we propose a general conceptual framework and give concrete methodological strategies for building mechanistic explanations in AI inner interpretability research. With this conceptual framework, Inner Interpretability can fend off critiques and position itself on a productive path to explain AI systems.
APA
Vilas, M. G., Adolfi, F., Poeppel, D., & Roig, G. (2024). Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:49506-49522. Available from https://proceedings.mlr.press/v235/vilas24a.html.
