A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations

Bilal Chughtai, Lawrence Chan, Neel Nanda
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6243-6267, 2023.

Abstract

Universality is a key hypothesis in mechanistic interpretability – that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small networks learn to implement group composition. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. By reverse engineering model logits and weights, we then show that these networks consistently learn this algorithm, and we confirm our understanding using ablations. By studying networks trained on various groups and architectures, we find mixed evidence for universality: using our algorithm, we can completely characterize the family of circuits and features that networks learn on this task, but for a given network the precise circuits learned – as well as the order in which they develop – are arbitrary.
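The algorithm referred to above builds group composition out of matrix representations: roughly, the model's logit for output c on inputs a and b tracks the trace tr(ρ(a)ρ(b)ρ(c)⁻¹), which is maximized exactly when c = ab. The following is a minimal NumPy sketch of that mechanism, not the authors' code; the choice of the cyclic group Z/n and its 2D rotation representation is an assumption made purely for illustration.

    import numpy as np

    # Sketch (assumption: cyclic group Z/n with its 2D rotation representation).
    # For an orthogonal representation rho, the quantity tr(rho(a) rho(b) rho(c)^T)
    # is maximized exactly when c = a * b, so it can serve as a composition "logit".
    n = 12  # group order, chosen arbitrarily for this example

    def rho(g):
        """2D rotation representation of the element g in Z/n."""
        theta = 2 * np.pi * g / n
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    a, b = 3, 7
    # Logit for each candidate output c; rho(c)^{-1} = rho(c).T for rotations.
    logits = np.array([np.trace(rho(a) @ rho(b) @ rho(c).T) for c in range(n)])

    assert logits.argmax() == (a + b) % n  # the true composition scores highest
    print(logits.round(3))

For rotations this trace equals 2cos(2π(a + b − c)/n), so the unique maximum sits at c = a + b (mod n); the paper's claim is that trained networks realize the analogous computation for general finite groups.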

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-chughtai23a,
  title     = {A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations},
  author    = {Chughtai, Bilal and Chan, Lawrence and Nanda, Neel},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {6243--6267},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/chughtai23a/chughtai23a.pdf},
  url       = {https://proceedings.mlr.press/v202/chughtai23a.html}
}
APA
Chughtai, B., Chan, L. & Nanda, N. (2023). A Toy Model of Universality: Reverse Engineering how Networks Learn Group Operations. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:6243-6267. Available from https://proceedings.mlr.press/v202/chughtai23a.html.