On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis

Junyi Guan, Abhijith Sharma, Chong Tian, Salem Lahlou
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:1586-1599, 2025.

Abstract

Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs), a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under the black-box setting that significantly enhances membership inference in SNNs. Our findings challenge the assumption that SNNs are inherently more secure: despite expectations to the contrary, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs).
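The abstract does not spell out the mechanics of the black-box input dropout attack, but the idea can be illustrated with a minimal PyTorch sketch. The working assumption here (ours, not a quote of the paper's algorithm) is that the adversary repeatedly drops a random fraction of the input, queries the target model, and scores membership by how stable the top-class confidence remains; training members are presumed to stay confident under such perturbation. All names (`dropout_mia_score`, `p_drop`, `n_trials`) are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def dropout_mia_score(model: nn.Module, x: torch.Tensor,
                      p_drop: float = 0.2, n_trials: int = 16) -> torch.Tensor:
    """Membership score from prediction stability under input dropout.

    Hypothetical sketch: queries the model n_trials times, each time
    zeroing a random fraction p_drop of the input, and records the
    top-class confidence. The assumed signal is that training members
    keep high, stable confidence, while non-members degrade faster.
    For an SNN, x would be a spike tensor over T timesteps and the
    mask would delete individual spike events.
    """
    model.eval()
    confs = []
    for _ in range(n_trials):
        keep = (torch.rand_like(x) > p_drop).float()    # Bernoulli keep-mask
        probs = torch.softmax(model(x * keep), dim=-1)  # black-box query
        confs.append(probs.max(dim=-1).values)
    confs = torch.stack(confs)  # shape: (n_trials, batch)
    # Higher mean and lower spread of confidence -> more member-like.
    return confs.mean(dim=0) - confs.std(dim=0)
```

A usage sketch under the same assumptions: the adversary computes `dropout_mia_score(model, x)` for each candidate sample and predicts membership when the score exceeds a threshold calibrated on shadow-model data.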

Cite this Paper

BibTeX
@InProceedings{pmlr-v286-guan25a,
  title     = {On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis},
  author    = {Guan, Junyi and Sharma, Abhijith and Tian, Chong and Lahlou, Salem},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {1586--1599},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/guan25a/guan25a.pdf},
  url       = {https://proceedings.mlr.press/v286/guan25a.html},
  abstract  = {Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs), a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under the black-box setting that significantly enhances membership inference in SNNs. Our findings challenge the assumption that SNNs are inherently more secure: despite expectations to the contrary, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs).}
}
Endnote
%0 Conference Paper
%T On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis
%A Junyi Guan
%A Abhijith Sharma
%A Chong Tian
%A Salem Lahlou
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-guan25a
%I PMLR
%P 1586--1599
%U https://proceedings.mlr.press/v286/guan25a.html
%V 286
%X Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs), a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under the black-box setting that significantly enhances membership inference in SNNs. Our findings challenge the assumption that SNNs are inherently more secure: despite expectations to the contrary, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs).
APA
Guan, J., Sharma, A., Tian, C. & Lahlou, S. (2025). On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:1586-1599. Available from https://proceedings.mlr.press/v286/guan25a.html.