Abstract
The reliability of deep neural networks is critical for industrial applications as well as for human safety and security. However, artificial deep neural networks have been found vulnerable to many kinds of natural, artificial, and adversarial image perturbations. In contrast, the human visual system is remarkably robust against a wide range of perturbations. At present, it is still unclear which mechanisms underlie this robustness. To better understand the robustness of biologically grounded neural networks, we evaluated two such networks of the primate visual system for their vulnerability to various image perturbations. We study a rate-based neural network, which combines Hebbian synaptic, intrinsic, and structural plasticity within a multi-layer neocortex-like architecture comprising feedforward excitation and inhibition, lateral inhibition, and feedback excitation and inhibition, and a spike-based neural network that focuses on a high degree of biologically plausible excitatory and inhibitory spike-timing-dependent plasticity. Both networks were trained on natural scenes and have previously been shown to learn receptive fields and response properties of the visual cortex and to perform convincingly on common object recognition benchmarks. We examine a subset of image perturbations from the corrupted MNIST dataset (MNIST-C), chosen to cover structurally different perturbation types: Gaussian noise, Gaussian blur, contrast reduction, rotation, frost, and multi-line distractors. We applied these perturbations to the MNIST and EMNIST datasets. We report the degradation of recognition performance at different levels of perturbation intensity and show how the individual layers of both network types improve over the preprocessed input (LGN), which serves as a baseline.
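As an illustration of such a severity-graded evaluation, the two simplest perturbation types (Gaussian noise and contrast reduction) can be sketched in a few lines of NumPy. This is a minimal sketch of an MNIST-C-style severity sweep; the scaling factors (0.08 per severity step for noise, 0.15 for contrast) are illustrative assumptions, not the parameters used in the paper or in MNIST-C.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def gaussian_noise(img, severity):
    """Add zero-mean Gaussian noise; severity (1-5) scales the std. dev."""
    noisy = img + rng.normal(0.0, 0.08 * severity, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def reduce_contrast(img, severity):
    """Blend pixels toward the mean gray level; higher severity removes more contrast."""
    factor = 1.0 - 0.15 * severity
    return np.clip((img - img.mean()) * factor + img.mean(), 0.0, 1.0)

# Toy 28x28 "digit": a bright vertical bar on a dark background.
img = np.zeros((28, 28))
img[:, 12:16] = 1.0

# Apply each perturbation at increasing severity, as in a severity sweep.
perturbed = {s: (gaussian_noise(img, s), reduce_contrast(img, s)) for s in (1, 3, 5)}
```

Recognition accuracy would then be measured on the perturbed images at each severity level, per perturbation type.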
Acknowledgments
This work has been partly funded by the Saxony State Ministry of Science and Art (SMWK3-7304/35/3-2021/4819) research initiative “Instant Teaming between Humans and Production Systems”.
Ethics declarations
Disclosure of Interests
The authors have no competing interests.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Teichmann, M., Larisch, R., Hamker, F.H. (2024). Robustness of Biologically Grounded Neural Networks Against Image Perturbations. In: Wand, M., Malinovská, K., Schmidhuber, J., Tetko, I.V. (eds) Artificial Neural Networks and Machine Learning – ICANN 2024. ICANN 2024. Lecture Notes in Computer Science, vol 15025. Springer, Cham. https://doi.org/10.1007/978-3-031-72359-9_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72358-2
Online ISBN: 978-3-031-72359-9