Training a General Spiking Neural Network with Improved Efficiency and Minimum Latency

Yunpeng Yao, Man Wu, Zheng Chen, Renyuan Zhang
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1558-1573, 2024.

Abstract

Spiking Neural Networks (SNNs), which operate in an event-driven manner and employ a binary spike representation, have recently emerged as promising candidates for energy-efficient computing. However, a cost bottleneck arises in obtaining high-performance SNNs: training an SNN model requires a large number of time steps in addition to the usual learning iterations, which limits its energy efficiency. This paper proposes a general training framework that enhances feature learning and activation efficiency within a limited number of time steps, providing a new solution for more energy-efficient SNNs. Our framework allows SNN neurons to learn robust spike features from different receptive fields and to update neuron states using both the current stimulus and recurrent information transmitted from other neurons; this setting continuously supplements information within a single time step. Additionally, we propose a projection function that merges these two stimuli to smoothly optimize neuron weights (spike firing threshold and activation). We evaluate the proposal on both convolutional and recurrent models. Our experimental results show state-of-the-art performance on visual classification tasks, including CIFAR10, CIFAR100, and TinyImageNet. Our framework achieves 72.41% and 72.31% top-1 accuracy with only 1 time step on CIFAR100 for CNNs and RNNs, respectively. On CIFAR10, our method consumes 10X and 3X less energy than a standard ANN and SNN, respectively, without additional time steps.
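To make the described neuron update concrete, below is a minimal PyTorch sketch of a LIF-style neuron that merges a feedforward stimulus with a recurrent signal through a learnable projection and fires against a learnable threshold within a single time step. The class name RecurrentLIFNeuron, the concatenation-based projection, the decay constant, and the rectangular surrogate gradient are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class SurrogateSpike(torch.autograd.Function):
        # Heaviside step in the forward pass; rectangular surrogate gradient
        # in the backward pass, a common trick for training SNNs end to end.
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            # Let gradients through only near the firing threshold.
            return grad_output * (v.abs() < 0.5).float()

    class RecurrentLIFNeuron(nn.Module):
        # Hypothetical LIF-style unit: merges the current stimulus with a
        # recurrent signal via a learnable projection, then fires against a
        # learnable threshold, all within a single time step.
        def __init__(self, features, decay=0.5):
            super().__init__()
            self.decay = decay
            # Assumed projection merging the two stimuli (not the paper's exact form).
            self.proj = nn.Linear(2 * features, features)
            # Learnable per-feature firing threshold.
            self.threshold = nn.Parameter(torch.ones(features))

        def forward(self, x, recurrent, v):
            # Merge the feedforward input with recurrence from other neurons.
            stimulus = self.proj(torch.cat([x, recurrent], dim=-1))
            v = self.decay * v + stimulus                  # membrane potential update
            spike = SurrogateSpike.apply(v - self.threshold)
            v = v * (1.0 - spike)                          # hard reset where a spike fired
            return spike, v

In use, the spikes and membrane potential returned by one invocation would be fed back as recurrent and v at the next, so the neuron keeps accumulating information even when only one time step is allowed.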

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-yao24b,
  title     = {Training a General Spiking Neural Network with Improved Efficiency and Minimum Latency},
  author    = {Yao, Yunpeng and Wu, Man and Chen, Zheng and Zhang, Renyuan},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1558--1573},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/yao24b/yao24b.pdf},
  url       = {https://proceedings.mlr.press/v222/yao24b.html}
}
Endnote
%0 Conference Paper
%T Training a General Spiking Neural Network with Improved Efficiency and Minimum Latency
%A Yunpeng Yao
%A Man Wu
%A Zheng Chen
%A Renyuan Zhang
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-yao24b
%I PMLR
%P 1558--1573
%U https://proceedings.mlr.press/v222/yao24b.html
%V 222
APA
Yao, Y., Wu, M., Chen, Z. & Zhang, R. (2024). Training a General Spiking Neural Network with Improved Efficiency and Minimum Latency. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1558-1573. Available from https://proceedings.mlr.press/v222/yao24b.html.