How does GPT-2 Predict Acronyms? Extracting and Understanding a Circuit via Mechanistic Interpretability

Jorge García-Carrasco, Alejandro Maté, Juan Carlos Trujillo
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3322-3330, 2024.

Abstract

Transformer-based language models are treated as black boxes because of their large number of parameters and complex internal interactions, which is a serious safety concern. Mechanistic Interpretability (MI) aims to reverse-engineer neural network behaviors in terms of human-understandable components. In this work, we focus on understanding how GPT-2 Small performs the task of predicting three-letter acronyms. Previous works in the MI field have so far focused on tasks that predict a single token. To the best of our knowledge, this is the first work that tries to mechanistically understand a behavior involving the prediction of multiple consecutive tokens. We discover that the prediction is performed by a circuit composed of 8 attention heads (~5% of the total heads), which we classify into three groups according to their role. We also demonstrate that these heads concentrate the acronym-prediction functionality. In addition, we mechanistically interpret the most relevant heads of the circuit and find that they use positional information which is propagated via the causal mask mechanism. We expect this work to lay the foundation for understanding more complex behaviors involving multiple-token predictions.
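
To make the studied behavior concrete, the following is a minimal, self-contained sketch of the acronym-prediction task: prompt GPT-2 Small with a capitalized multi-word phrase followed by an opening parenthesis and greedily decode the next few tokens. The HuggingFace transformers API and the example phrase are illustrative assumptions, not the authors' exact pipeline or dataset.

# Minimal sketch of the acronym-prediction behavior on GPT-2 Small.
# Assumptions: HuggingFace transformers tooling and a hypothetical prompt;
# the paper builds a dedicated dataset of such phrases.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # "gpt2" = GPT-2 Small
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Chief Executive Officer ("                # hypothetical example phrase
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(3):                                   # greedily decode a few tokens
        next_id = model(ids).logits[0, -1].argmax()      # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

# If the behavior holds, the continuation should spell out the acronym (e.g. "CEO");
# how many BPE tokens the acronym letters span depends on the tokenizer.
print(tokenizer.decode(ids[0]))

The circuit analysis in the paper goes further than this, e.g. by ablating or patching individual attention heads and measuring the effect on the acronym logits; the sketch above only reproduces the behavior under study.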

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-garcia-carrasco24a,
  title     = {How does {GPT-2} Predict Acronyms? Extracting and Understanding a Circuit via Mechanistic Interpretability},
  author    = {Garc\'{i}a-Carrasco, Jorge and Mat\'{e}, Alejandro and Carlos Trujillo, Juan},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3322--3330},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/garcia-carrasco24a/garcia-carrasco24a.pdf},
  url       = {https://proceedings.mlr.press/v238/garcia-carrasco24a.html}
}
Endnote
%0 Conference Paper
%T How does GPT-2 Predict Acronyms? Extracting and Understanding a Circuit via Mechanistic Interpretability
%A Jorge García-Carrasco
%A Alejandro Maté
%A Juan Carlos Trujillo
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-garcia-carrasco24a
%I PMLR
%P 3322--3330
%U https://proceedings.mlr.press/v238/garcia-carrasco24a.html
%V 238
APA
García-Carrasco, J., Maté, A. & Carlos Trujillo, J. (2024). How does GPT-2 Predict Acronyms? Extracting and Understanding a Circuit via Mechanistic Interpretability. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3322-3330. Available from https://proceedings.mlr.press/v238/garcia-carrasco24a.html.