Combining Causal Models for More Accurate Abstractions of Neural Networks

Theodora-Mara Pîslar, Sara Magliacane, Atticus Geiger
Proceedings of the Fourth Conference on Causal Learning and Reasoning, PMLR 275:114-138, 2025.

Abstract

Mechanistic interpretability aims to reverse engineer neural networks by uncovering which high-level algorithms they implement. Causal abstraction provides a precise notion of when a network implements an algorithm, i.e., when a causal model of the network contains low-level features that realize the high-level variables in a causal model of the algorithm (Geiger et al., 2024). A typical problem in practical settings is that the algorithm is not an entirely faithful abstraction of the network, i.e., it only partially captures the network's true reasoning process. We propose a solution that combines different simple high-level models to produce a more faithful representation of the network. By learning this combination, we can model neural networks as being in different computational states depending on the input provided, which we show more accurately describes GPT-2 small fine-tuned on two toy tasks. We observe a trade-off between the strength of an interpretability hypothesis, which we define in terms of the number of inputs explained by the high-level models, and its faithfulness, which we define as the interchange intervention accuracy. Our method allows us to modulate between the two, providing the most accurate combination of models describing the behavior of a neural network at a given faithfulness level.
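
The abstract measures faithfulness with interchange intervention accuracy (IIA). As an illustrative aid, the sketch below shows one way IIA can be computed for a toy alignment: for each (base, source) input pair, the network is run on the base input with the aligned activations patched in from its run on the source input, and the result is compared with the high-level model's counterfactual prediction. All names and the toy setup are assumptions for illustration, not the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class HighLevelModel:
    # Computes the intermediate high-level variable from an input (illustrative).
    compute_variable: Callable[[Tuple[int, int]], int]
    # Computes the model's output from that variable.
    compute_output: Callable[[int], int]

def interchange_intervention_accuracy(
    patched_low_level_run: Callable[[Tuple[int, int], Tuple[int, int]], int],
    high_level: HighLevelModel,
    pairs: List[Tuple[Tuple[int, int], Tuple[int, int]]],
) -> float:
    """Fraction of (base, source) pairs on which the patched network output
    matches the high-level model's counterfactual prediction."""
    hits = 0
    for base, source in pairs:
        # High-level counterfactual: the intermediate variable takes the value
        # it would have on the source input; everything else comes from base.
        hl_out = high_level.compute_output(high_level.compute_variable(source))
        # Low-level counterfactual: run the network on base with the aligned
        # activations patched in from its run on source (abstracted as one callable).
        ll_out = patched_low_level_run(base, source)
        hits += int(hl_out == ll_out)
    return hits / len(pairs)

# Toy usage: a "network" that adds two numbers, a high-level model with a
# single sum variable, and a perfect alignment (so IIA is 1.0).
high_level = HighLevelModel(
    compute_variable=lambda xy: xy[0] + xy[1],
    compute_output=lambda s: s,
)
patched_run = lambda base, source: source[0] + source[1]
pairs = [((1, 2), (3, 4)), ((0, 5), (2, 2))]
print(interchange_intervention_accuracy(patched_run, high_level, pairs))  # prints 1.0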

Cite this Paper


BibTeX
@InProceedings{pmlr-v275-pislar25a,
  title     = {Combining Causal Models for More Accurate Abstractions of Neural Networks},
  author    = {P\^{i}slar, Theodora-Mara and Magliacane, Sara and Geiger, Atticus},
  booktitle = {Proceedings of the Fourth Conference on Causal Learning and Reasoning},
  pages     = {114--138},
  year      = {2025},
  editor    = {Huang, Biwei and Drton, Mathias},
  volume    = {275},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v275/main/assets/pislar25a/pislar25a.pdf},
  url       = {https://proceedings.mlr.press/v275/pislar25a.html}
}
Endnote
%0 Conference Paper
%T Combining Causal Models for More Accurate Abstractions of Neural Networks
%A Theodora-Mara Pîslar
%A Sara Magliacane
%A Atticus Geiger
%B Proceedings of the Fourth Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Biwei Huang
%E Mathias Drton
%F pmlr-v275-pislar25a
%I PMLR
%P 114--138
%U https://proceedings.mlr.press/v275/pislar25a.html
%V 275
APA
Pîslar, T., Magliacane, S. & Geiger, A. (2025). Combining Causal Models for More Accurate Abstractions of Neural Networks. Proceedings of the Fourth Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 275:114-138. Available from https://proceedings.mlr.press/v275/pislar25a.html.
