To address the low adversarial robustness of spiking neural networks (SNNs), a highly robust spiking recurrent neural network model inspired by biological vision is proposed. The model incorporates mechanisms of the primary visual cortex (V1): a convolutional SNN front end designed under biological constraints, and an SNN back end with an internal recurrent mechanism built by integrating cortical feedback connections. Without adversarial training, the model improves adversarial accuracy by 31.60%, 22.11%, and 20.99% on the SVHN, CIFAR10, and CIFAR100 datasets, respectively; with adversarial training, the improvements are 20.64%, 8.79%, and 6.89%, respectively. Moreover, as the perturbation factor (ε) and the time window (T) increase, the model's accuracy consistently exceeds that of the baseline model. The experimental results show that the spiking recurrent neural network incorporating biological vision mechanisms achieves significantly higher accuracy under adversarial attacks, demonstrating enhanced adversarial robustness.