Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity

Wanjin Feng, Xingyu Gao, Wenqian Du, Hailong Shi, Peilin Zhao, Pengcheng Wu, Chunyan Miao
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:16711-16726, 2025.

Abstract

Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to the sequential processing of $T$ spikes, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our approach. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-duration simulations.
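To make the high-level idea concrete, below is a minimal numpy sketch of what a fixed-point parallel LIF update could look like: the spike train for all $T$ timesteps is treated as an unknown and refined over $K$ passes, where each pass recomputes every membrane potential in parallel given the previous spike estimate. The soft-reset dynamics, the dense $T \times T$ decay matrix, and all names (lif_sequential, lif_fixed_point_parallel, tau, v_th, K) are illustrative assumptions based only on the abstract; this is not the authors' implementation.

# Illustrative sketch of a fixed-point parallel LIF update (assumptions only;
# not the paper's actual formulation).
import numpy as np

def lif_sequential(x, tau=0.5, v_th=1.0):
    """Reference LIF simulation with soft reset: O(T) sequential timesteps."""
    u = np.zeros_like(x)
    s = np.zeros_like(x)
    v = np.zeros_like(x[0])
    prev_spike = np.zeros_like(x[0])
    for t in range(x.shape[0]):
        v = tau * v + x[t] - v_th * prev_spike   # leak, input drive, soft reset
        u[t] = v
        prev_spike = (v >= v_th).astype(x.dtype)
        s[t] = prev_spike
    return s

def lif_fixed_point_parallel(x, tau=0.5, v_th=1.0, K=3):
    """Hypothetical fixed-point form: K passes, each updating all T timesteps
    at once given the previous spike-train estimate."""
    T = x.shape[0]
    idx = np.arange(T)
    diff = idx[:, None] - idx[None, :]
    # W[t, j] = tau^(t - j) for j <= t, else 0: accumulated leak factors.
    W = np.where(diff >= 0, tau ** np.clip(diff, 0, None), 0.0)
    s = np.zeros_like(x)                          # initial spike-train guess
    for _ in range(K):
        # With the reset term frozen at the current spike estimate, the
        # recurrence is linear, so all T potentials are one matrix product.
        reset = np.concatenate([np.zeros_like(x[:1]), v_th * s[:-1]], axis=0)
        u = W @ (x - reset)
        s = (u >= v_th).astype(x.dtype)
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=(16, 8))       # T=16 timesteps, 8 neurons
    match = lif_sequential(x) == lif_fixed_point_parallel(x, K=3)
    print("fraction of matching spikes:", match.mean())

In this sketch, each pass replaces the length-$T$ sequential scan with a single matrix product, so the work within a pass is parallel across timesteps; the abstract reports that a small constant number of passes (typically $K=3$) suffices in practice.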

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-feng25e,
  title     = {Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity},
  author    = {Feng, Wanjin and Gao, Xingyu and Du, Wenqian and Shi, Hailong and Zhao, Peilin and Wu, Pengcheng and Miao, Chunyan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {16711--16726},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/feng25e/feng25e.pdf},
  url       = {https://proceedings.mlr.press/v267/feng25e.html},
  abstract  = {Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to the sequential processing of $T$ spikes, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our approach. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-duration simulations.}
}
Endnote
%0 Conference Paper
%T Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity
%A Wanjin Feng
%A Xingyu Gao
%A Wenqian Du
%A Hailong Shi
%A Peilin Zhao
%A Pengcheng Wu
%A Chunyan Miao
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-feng25e
%I PMLR
%P 16711--16726
%U https://proceedings.mlr.press/v267/feng25e.html
%V 267
%X Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to the sequential processing of $T$ spikes, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our approach. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-duration simulations.
APA
Feng, W., Gao, X., Du, W., Shi, H., Zhao, P., Wu, P. & Miao, C. (2025). Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:16711-16726. Available from https://proceedings.mlr.press/v267/feng25e.html.