Faster Optimization on Sparse Graphs via Neural Reparametrization

Csaba Both, Nima Dehmamy, Jianzhi Long, Rose Yu
Proceedings of the Third Learning on Graphs Conference, PMLR 269:26:1-26:21, 2025.

Abstract

Many scientific problems involve energy minimization on sparse graphs, such as heat diffusion in solids and synchronization. These problems often suffer from slow convergence, particularly when the graph is random, as in glassy systems. We show that graph neural networks (GNNs) can significantly speed up such optimization problems. Our idea is to represent the state of each node as the output of a graph neural network. We demonstrate the benefit of this GNN reparametrization in experiments on heat diffusion and the synchronization of nonlinear oscillators. When optimizing with gradient descent, this GNN reparametrization has the effect of a quasi-Newton method.
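The paper's actual architecture and experiments are in the linked PDF; as a rough illustration of the reparametrization idea the abstract describes, the sketch below (ours, not the authors' code) minimizes a Dirichlet ("heat diffusion") energy x^T L x on a random graph in two ways: by gradient descent directly on the node states x, and by gradient descent on the weights of a single linear graph-convolution-style layer whose output is x. The random graph model, the fixed random features H, the one-layer linear GNN, and all hyperparameters are assumptions made for illustration only.

import torch

torch.manual_seed(0)
n, d = 100, 16                       # number of nodes, feature width (illustrative)

# Random sparse symmetric adjacency and graph Laplacian L = D - A.
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(0)
L = torch.diag(A.sum(1)) - A

def energy(x):
    # Dirichlet energy x^T L x: sum of squared differences across edges.
    return x @ L @ x

# Baseline: optimize the node states x directly.
x_direct = torch.randn(n, requires_grad=True)

# GNN reparametrization (illustrative): x = (A_hat @ H) @ w, i.e. the node
# states are the output of one linear graph-convolution-style layer with
# fixed random features H and trainable weights w. Gradient descent now
# updates w instead of x.
H = torch.randn(n, d) / d ** 0.5     # fixed random node features
w = torch.randn(d, requires_grad=True)
A_hat = A + torch.eye(n)             # add self-loops
A_hat = A_hat / A_hat.sum(1, keepdim=True)   # row-normalize

def x_gnn():
    return (A_hat @ H) @ w

opt_direct = torch.optim.SGD([x_direct], lr=1e-3)
opt_gnn = torch.optim.SGD([w], lr=1e-3)
for step in range(2000):
    for opt, state in ((opt_direct, lambda: x_direct), (opt_gnn, x_gnn)):
        opt.zero_grad()
        e = energy(state())
        e.backward()
        opt.step()

print(f"energy, direct: {energy(x_direct).item():.4e}")
print(f"energy, GNN-reparametrized: {energy(x_gnn()).item():.4e}")

Loosely, each gradient step on w propagates information across edges through the graph-convolution layer rather than updating each node in isolation, which is one intuition for the quasi-Newton-like effect the abstract claims; the paper's formal argument is in the PDF.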

Cite this Paper

BibTeX
@InProceedings{pmlr-v269-both25a, title = {Faster Optimization on Sparse Graphs via Neural Reparametrization}, author = {Both, Csaba and Dehmamy, Nima and Long, Jianzhi and Yu, Rose}, booktitle = {Proceedings of the Third Learning on Graphs Conference}, pages = {26:1--26:21}, year = {2025}, editor = {Wolf, Guy and Krishnaswamy, Smita}, volume = {269}, series = {Proceedings of Machine Learning Research}, month = {26--29 Nov}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v269/main/assets/both25a/both25a.pdf}, url = {https://proceedings.mlr.press/v269/both25a.html}, abstract = {Many scientific problems involve energy minimization on sparse graphs, such as heat diffusion in solids and synchronization. These problems often suffer from slow convergence, particularly when the graph is random, as in glassy systems. We show that graph neural networks (GNNs) can significantly speed up such optimization problems. Our idea is to represent the state of each node as the output of a graph neural network. We demonstrate the benefit of this GNN reparametrization in experiments on heat diffusion and the synchronization of nonlinear oscillators. When optimizing with gradient descent, this GNN reparametrization has the effect of a quasi-Newton method.} }
Endnote
%0 Conference Paper %T Faster Optimization on Sparse Graphs via Neural Reparametrization %A Csaba Both %A Nima Dehmamy %A Jianzhi Long %A Rose Yu %B Proceedings of the Third Learning on Graphs Conference %C Proceedings of Machine Learning Research %D 2025 %E Guy Wolf %E Smita Krishnaswamy %F pmlr-v269-both25a %I PMLR %P 26:1--26:21 %U https://proceedings.mlr.press/v269/both25a.html %V 269 %X Many scientific problems involve energy minimization on sparse graphs, such as heat diffusion in solids and synchronization. These problems often suffer from slow convergence, particularly when the graph is random, as in glassy systems. We show that graph neural networks (GNNs) can significantly speed up such optimization problems. Our idea is to represent the state of each node as the output of a graph neural network. We demonstrate the benefit of this GNN reparametrization in experiments on heat diffusion and the synchronization of nonlinear oscillators. When optimizing with gradient descent, this GNN reparametrization has the effect of a quasi-Newton method.
APA
Both, C., Dehmamy, N., Long, J. & Yu, R. (2025). Faster Optimization on Sparse Graphs via Neural Reparametrization. Proceedings of the Third Learning on Graphs Conference, in Proceedings of Machine Learning Research 269:26:1-26:21. Available from https://proceedings.mlr.press/v269/both25a.html.
