Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data

Zhong Guan, Likang Wu, Hongke Zhao, Ming He, Jianping Fan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:20612-20639, 2025.

Abstract

Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on topological connections, they fall short compared to message-passing mechanisms on fixed links, such as those employed by Graph Neural Networks (GNNs). This raises a question: “Does attention fail for graphs in natural language settings?” Motivated by these observations, we embarked on an empirical study from the perspective of attention mechanisms to explore how LLMs process graph-structured data. The goal is to gain deeper insights into the attention behavior of LLMs over graph structures. Through a series of experiments, we uncovered unique phenomena regarding how LLMs apply attention to graph-structured data and analyzed these findings to improve the modeling of such data by LLMs. The primary findings of our research are: 1) While LLMs can recognize graph data and capture text-node interactions, they struggle to model inter-node relationships within graph structures due to inherent architectural constraints. 2) The attention distribution of LLMs across graph nodes does not align with ideal structural patterns, indicating a failure to adapt to graph topology nuances. 3) Neither fully connected attention (as in LLMs) nor fixed connectivity (as in GNNs) is optimal; each has specific limitations in its application scenarios. Instead, intermediate-state attention windows improve LLM training performance and seamlessly transition to fully connected windows during inference. Source code: https://anonymous.4open.science/r/LLM_exploration-B21F
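The contrast the abstract draws between fully connected attention (LLMs), fixed-link message passing (GNNs), and intermediate-state attention windows can be sketched as three attention masks over graph-node tokens. This is an illustrative construction, not the paper's actual implementation; the `window` parameter and the union of graph links with a local sequence window are assumptions made here for clarity.

```python
import numpy as np

def attention_masks(adj: np.ndarray, window: int = 1):
    """Sketch of three attention patterns over n graph-node tokens.

    adj: (n, n) binary adjacency matrix (1 = edge between nodes).
    Returns boolean masks where True means "may attend".
    """
    n = adj.shape[0]
    # 1) Fully connected attention, as in a standard LLM:
    #    every node token may attend to every other node token.
    full = np.ones((n, n), dtype=bool)
    # 2) Fixed connectivity, as in GNN message passing:
    #    a node attends only to its graph neighbours (and itself).
    fixed = adj.astype(bool) | np.eye(n, dtype=bool)
    # 3) Intermediate-state window (hypothetical construction):
    #    graph neighbours plus nodes within `window` positions in the
    #    token sequence -- wider than fixed links, narrower than full.
    idx = np.arange(n)
    local = np.abs(idx[:, None] - idx[None, :]) <= window
    intermediate = fixed | local
    return full, fixed, intermediate
```

Under this sketch, training could use the intermediate mask and inference could swap in the full mask, since the full mask is a superset of the intermediate one.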

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-guan25e,
  title     = {Attention Mechanisms Perspective: Exploring {LLM} Processing of Graph-Structured Data},
  author    = {Guan, Zhong and Wu, Likang and Zhao, Hongke and He, Ming and Fan, Jianping},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {20612--20639},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/guan25e/guan25e.pdf},
  url       = {https://proceedings.mlr.press/v267/guan25e.html},
  abstract  = {Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on topological connections, they fall short compared to message-passing mechanisms on fixed links, such as those employed by Graph Neural Networks (GNNs). This raises a question: “Does attention fail for graphs in natural language settings?” Motivated by these observations, we embarked on an empirical study from the perspective of attention mechanisms to explore how LLMs process graph-structured data. The goal is to gain deeper insights into the attention behavior of LLMs over graph structures. Through a series of experiments, we uncovered unique phenomena regarding how LLMs apply attention to graph-structured data and analyzed these findings to improve the modeling of such data by LLMs. The primary findings of our research are: 1) While LLMs can recognize graph data and capture text-node interactions, they struggle to model inter-node relationships within graph structures due to inherent architectural constraints. 2) The attention distribution of LLMs across graph nodes does not align with ideal structural patterns, indicating a failure to adapt to graph topology nuances. 3) Neither fully connected attention (as in LLMs) nor fixed connectivity (as in GNNs) is optimal; each has specific limitations in its application scenarios. Instead, intermediate-state attention windows improve LLM training performance and seamlessly transition to fully connected windows during inference. Source code: https://anonymous.4open.science/r/LLM_exploration-B21F}
}
Endnote
%0 Conference Paper
%T Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data
%A Zhong Guan
%A Likang Wu
%A Hongke Zhao
%A Ming He
%A Jianping Fan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-guan25e
%I PMLR
%P 20612--20639
%U https://proceedings.mlr.press/v267/guan25e.html
%V 267
%X Attention mechanisms are critical to the success of large language models (LLMs), driving significant advancements in multiple fields. However, for graph-structured data, which requires emphasis on topological connections, they fall short compared to message-passing mechanisms on fixed links, such as those employed by Graph Neural Networks (GNNs). This raises a question: “Does attention fail for graphs in natural language settings?” Motivated by these observations, we embarked on an empirical study from the perspective of attention mechanisms to explore how LLMs process graph-structured data. The goal is to gain deeper insights into the attention behavior of LLMs over graph structures. Through a series of experiments, we uncovered unique phenomena regarding how LLMs apply attention to graph-structured data and analyzed these findings to improve the modeling of such data by LLMs. The primary findings of our research are: 1) While LLMs can recognize graph data and capture text-node interactions, they struggle to model inter-node relationships within graph structures due to inherent architectural constraints. 2) The attention distribution of LLMs across graph nodes does not align with ideal structural patterns, indicating a failure to adapt to graph topology nuances. 3) Neither fully connected attention (as in LLMs) nor fixed connectivity (as in GNNs) is optimal; each has specific limitations in its application scenarios. Instead, intermediate-state attention windows improve LLM training performance and seamlessly transition to fully connected windows during inference. Source code: https://anonymous.4open.science/r/LLM_exploration-B21F
APA
Guan, Z., Wu, L., Zhao, H., He, M., & Fan, J. (2025). Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:20612-20639. Available from https://proceedings.mlr.press/v267/guan25e.html.