KGAccel: A Domain-Specific Reconfigurable Accelerator for Knowledge Graph Reasoning

Hanning Chen, Ali Zakeri, Yang Ni, Fei Wen, Behnam Khaleghi, Hugo Latapie, Alvaro Velasquez, Mohsen Imani
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:424-445, 2025.

Abstract

Recent hardware accelerators for graph learning have largely overlooked knowledge graph reasoning (KGR), which demands more complex models and longer training times than typical graph tasks. Existing approaches rely on single or distributed GPUs to accelerate translational embedding models, but these general-purpose solutions lag in handling reinforcement learning-based KGR. To address this gap, we introduce KGAccel, the first domain-specific accelerator for RL-based KGR on FPGA. We develop a knowledge-graph compression method and propose a resource-aware mechanism that enables high-speed training even on smaller FPGAs. KGAccel achieves up to 65x speedup over CPU, 8x over GPU, and over 30x higher energy efficiency.
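For readers unfamiliar with the RL-based formulation the paper targets: reasoning is typically cast as a policy-guided walk over the graph, where an agent starts at a query's head entity and selects outgoing relations until it reaches (or fails to reach) the answer entity. The sketch below illustrates this general setting (in the style of path-walking methods such as MINERVA); it is not KGAccel's implementation, and the toy graph, function names, and the random stand-in policy are all illustrative.

import random

# Toy knowledge graph: entity -> list of (relation, next_entity) edges.
KG = {
    "Paris":  [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("part_of", "Europe"), ("has_capital", "Paris")],
    "Europe": [("contains", "France")],
}

def rollout(start, target, policy, max_hops=3):
    """One reasoning episode: walk the KG for up to max_hops steps,
    earning terminal reward 1.0 if the walk ends at the target entity."""
    entity, path = start, []
    for _ in range(max_hops):
        actions = KG.get(entity, [])
        if not actions:
            break                       # dead end: no outgoing edges
        relation, entity = policy(entity, actions)
        path.append((relation, entity))
        if entity == target:
            return 1.0, path            # success: reached the answer
    return 0.0, path                    # failure: no reward

def random_policy(entity, actions):
    # Stand-in for a learned policy network (e.g., an LSTM over the
    # path history); training such a policy is the costly step that
    # motivates hardware acceleration.
    return random.choice(actions)

reward, path = rollout("Paris", "Europe", random_policy)
print(reward, path)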

Cite this Paper

BibTeX
@InProceedings{pmlr-v288-chen25a,
  title     = {KGAccel: A Domain-Specific Reconfigurable Accelerator for Knowledge Graph Reasoning},
  author    = {Chen, Hanning and Zakeri, Ali and Ni, Yang and Wen, Fei and Khaleghi, Behnam and Latapie, Hugo and Velasquez, Alvaro and Imani, Mohsen},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {424--445},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/chen25a/chen25a.pdf},
  url       = {https://proceedings.mlr.press/v288/chen25a.html},
  abstract  = {Recent hardware accelerators for graph learning have largely overlooked knowledge graph reasoning (KGR), which demands more complex models and longer training times than typical graph tasks. Existing approaches rely on single or distributed GPUs to accelerate translational embedding models, but these general-purpose solutions lag in handling reinforcement learning-based KGR. To address this gap, we introduce KGAccel, the first domain-specific accelerator for RL-based KGR on FPGA. We develop a knowledge-graph compression method and propose a resource-aware mechanism that enables high-speed training even on smaller FPGAs. KGAccel achieves up to 65x speedup over CPU, 8x over GPU, and over 30x higher energy efficiency.}
}
Endnote
%0 Conference Paper
%T KGAccel: A Domain-Specific Reconfigurable Accelerator for Knowledge Graph Reasoning
%A Hanning Chen
%A Ali Zakeri
%A Yang Ni
%A Fei Wen
%A Behnam Khaleghi
%A Hugo Latapie
%A Alvaro Velasquez
%A Mohsen Imani
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-chen25a
%I PMLR
%P 424--445
%U https://proceedings.mlr.press/v288/chen25a.html
%V 288
%X Recent hardware accelerators for graph learning have largely overlooked knowledge graph reasoning (KGR), which demands more complex models and longer training times than typical graph tasks. Existing approaches rely on single or distributed GPUs to accelerate translational embedding models, but these general-purpose solutions lag in handling reinforcement learning-based KGR. To address this gap, we introduce KGAccel, the first domain-specific accelerator for RL-based KGR on FPGA. We develop a knowledge-graph compression method and propose a resource-aware mechanism that enables high-speed training even on smaller FPGAs. KGAccel achieves up to 65x speedup over CPU, 8x over GPU, and over 30x higher energy efficiency.
APA
Chen, H., Zakeri, A., Ni, Y., Wen, F., Khaleghi, B., Latapie, H., Velasquez, A. & Imani, M. (2025). KGAccel: A Domain-Specific Reconfigurable Accelerator for Knowledge Graph Reasoning. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:424-445. Available from https://proceedings.mlr.press/v288/chen25a.html.
