Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time

Duc Anh Nguyen, Ernesto Araya, Adalbert Fono, Gitta Kutyniok
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:45954-45987, 2025.

Abstract

Recent years have seen significant progress in developing spiking neural networks (SNNs) as a potential solution to the energy challenges posed by conventional artificial neural networks (ANNs). However, our theoretical understanding of SNNs remains relatively limited compared to the ever-growing body of literature on ANNs. In this paper, we study a discrete-time model of SNNs based on leaky integrate-and-fire (LIF) neurons, referred to as discrete-time LIF-SNNs, a widely used framework that still lacks solid theoretical foundations. We demonstrate that discrete-time LIF-SNNs realize piecewise constant functions defined on polyhedral regions, and more importantly, we quantify the network size required to approximate continuous functions. Moreover, we investigate the impact of latency (number of time steps) and depth (number of layers) on the complexity of the input space partitioning induced by discrete-time LIF-SNNs. Our analysis highlights the importance of latency and contrasts these networks with ANNs that use piecewise linear activation functions. Finally, we present numerical experiments to support our theoretical findings.
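
To make the abstract's central claim concrete, below is a minimal Python sketch of a single discrete-time LIF layer, assuming the common formulation with a leak factor beta, a Heaviside spike condition at threshold theta, and a hard reset; the paper's exact conventions may differ, and the names lif_layer, beta, and theta are illustrative choices, not taken from the paper. Because each step's output is a thresholding of an affine function of the input and of the (finitely many) past spike patterns, the realized map is piecewise constant on polyhedral regions, matching the abstract's description.

import numpy as np

def lif_layer(x_seq, W, beta=0.9, theta=1.0):
    """Discrete-time LIF layer (hard reset).

    x_seq: (T, d_in) input sequence; returns (T, d_out) binary spike trains.
    Update: u[t] = beta * u[t-1] + W x[t]; spike if u[t] >= theta, then reset.
    """
    T = x_seq.shape[0]
    u = np.zeros(W.shape[0])                  # membrane potentials
    spikes = np.zeros((T, W.shape[0]))
    for t in range(T):
        u = beta * u + W @ x_seq[t]           # leaky integration of input
        s = (u >= theta).astype(float)        # Heaviside spike condition
        spikes[t] = s
        u = u * (1.0 - s)                     # hard reset of fired neurons
    return spikes

# Usage: a constant input presented for T = 4 time steps (latency 4).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
x = np.repeat(rng.standard_normal((1, 2)), 4, axis=0)
print(lif_layer(x, W))

Since the spike output takes finitely many values and changes only when the input crosses one of the affine threshold surfaces, any readout of these spikes (e.g., a rate code) is constant on each polyhedral cell of the induced input partition; increasing the latency T or the depth refines this partition, which is the complexity trade-off the paper quantifies.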

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-nguyen25a,
  title     = {Time to Spike? {U}nderstanding the Representational Power of Spiking Neural Networks in Discrete Time},
  author    = {Nguyen, Duc Anh and Araya, Ernesto and Fono, Adalbert and Kutyniok, Gitta},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {45954--45987},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/nguyen25a/nguyen25a.pdf},
  url       = {https://proceedings.mlr.press/v267/nguyen25a.html},
  abstract  = {Recent years have seen significant progress in developing spiking neural networks (SNNs) as a potential solution to the energy challenges posed by conventional artificial neural networks (ANNs). However, our theoretical understanding of SNNs remains relatively limited compared to the ever-growing body of literature on ANNs. In this paper, we study a discrete-time model of SNNs based on leaky integrate-and-fire (LIF) neurons, referred to as discrete-time LIF-SNNs, a widely used framework that still lacks solid theoretical foundations. We demonstrate that discrete-time LIF-SNNs realize piecewise constant functions defined on polyhedral regions, and more importantly, we quantify the network size required to approximate continuous functions. Moreover, we investigate the impact of latency (number of time steps) and depth (number of layers) on the complexity of the input space partitioning induced by discrete-time LIF-SNNs. Our analysis highlights the importance of latency and contrasts these networks with ANNs that use piecewise linear activation functions. Finally, we present numerical experiments to support our theoretical findings.}
}
Endnote
%0 Conference Paper
%T Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time
%A Duc Anh Nguyen
%A Ernesto Araya
%A Adalbert Fono
%A Gitta Kutyniok
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-nguyen25a
%I PMLR
%P 45954--45987
%U https://proceedings.mlr.press/v267/nguyen25a.html
%V 267
%X Recent years have seen significant progress in developing spiking neural networks (SNNs) as a potential solution to the energy challenges posed by conventional artificial neural networks (ANNs). However, our theoretical understanding of SNNs remains relatively limited compared to the ever-growing body of literature on ANNs. In this paper, we study a discrete-time model of SNNs based on leaky integrate-and-fire (LIF) neurons, referred to as discrete-time LIF-SNNs, a widely used framework that still lacks solid theoretical foundations. We demonstrate that discrete-time LIF-SNNs realize piecewise constant functions defined on polyhedral regions, and more importantly, we quantify the network size required to approximate continuous functions. Moreover, we investigate the impact of latency (number of time steps) and depth (number of layers) on the complexity of the input space partitioning induced by discrete-time LIF-SNNs. Our analysis highlights the importance of latency and contrasts these networks with ANNs that use piecewise linear activation functions. Finally, we present numerical experiments to support our theoretical findings.
APA
Nguyen, D.A., Araya, E., Fono, A. & Kutyniok, G. (2025). Time to Spike? Understanding the Representational Power of Spiking Neural Networks in Discrete Time. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:45954-45987. Available from https://proceedings.mlr.press/v267/nguyen25a.html.