Abstract
Deep Neural Networks (DNNs) have recently become the standard tool for solving problems, such as classification, that are prohibitively difficult for humans or classical statistical methods. Nevertheless, DNNs are vulnerable to small adversarial perturbations that cause legitimate images to be misclassified. Adversarial attacks pose a security risk to deployed DNNs and indicate a divergence between how DNNs and humans perform classification. It has been shown that sleep improves knowledge generalization and robustness against noise in animals and humans. This paper proposes a defense algorithm that uses a biologically inspired sleep phase in a Variational Auto-Encoder (Defense-VAE-Sleep) to purge adversarial perturbations from contaminated images. We demonstrate the benefit of sleep in improving the generalization performance of the traditional VAE when the test data differ from the training data in specific ways, even by a small amount. We conduct extensive experiments, including comparisons with the state-of-the-art, on three datasets: CelebA, MNIST, and Fashion-MNIST. Overall, our results demonstrate that our proposed model defends against adversarial attacks and yields more robust classification than the competing models, Defense-VAE and Defense-GAN.
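To make the purification step concrete, the following is a minimal PyTorch sketch of a Defense-VAE-style pipeline: an attacked image is encoded into a low-dimensional latent space and decoded back, and the reconstruction, rather than the raw input, is fed to the downstream classifier, so small perturbations that do not survive the latent bottleneck are largely discarded. The names (DefenseVAE, purify_and_classify), the layer sizes, and the 28x28 input shape are illustrative assumptions, not the authors' implementation, and the biologically inspired sleep phase that retunes the trained VAE is omitted here.

```python
import torch
import torch.nn as nn

class DefenseVAE(nn.Module):
    """Illustrative VAE purifier: encodes a (possibly adversarial) image
    into a low-dimensional latent code and decodes it back, discarding
    perturbations the bottleneck cannot represent. The sleep phase of
    Defense-VAE-Sleep is not modeled in this sketch."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z).view_as(x)

def purify_and_classify(vae, classifier, x_adv):
    """Reconstruct the attacked batch, then classify the reconstruction
    instead of the raw (perturbed) input."""
    with torch.no_grad():
        x_purified = vae(x_adv)
        return classifier(x_purified).argmax(dim=1)
```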
References
Tadros, T., Krishnan, G., Ramyaa, R., Bazhenov, M.: Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks. In: International Conference on Learning Representations (2019)
Han, X., et al.: Adversarial attacks and defenses in images, graphs and text: a review. Int. J. Autom. Comput. 17, 151–178 (2020)
Li, X., Ji, S.: Defense-VAE: a fast and accurate defense against adversarial attacks. In: Cellier, P., Driessens, K. (eds.) ECML PKDD 2019. CCIS, vol. 1168, pp. 191–207. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43887-6_15
Burbank, K.S.: Mirrored STDP implements autoencoder learning in a network of spiking neurons. PLoS Comput. Biol. 11(12), e1004566 (2015)
Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018)
Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Meng, D., Chen, H.: MagNet: a two-pronged defense against adversarial examples. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147 (2017)
Osadchy, M., Hernandez-Castro, J., Gibson, S.J., Dunkelman, O., Pérez-Cabo, D.: No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA. IACR Cryptol. ePrint Arch. 2016, 336 (2016)
Gu, S., Rigazio, L.: Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068 (2014)
Nayebi, A., Ganguli, S.: Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202 (2017)
Bagheri, A.: Probabilistic spiking neural networks: supervised, unsupervised and adversarial trainings (2019)
Talafha, S., Rekabdar, B., Mousas, C., Ekenna, C.: Biologically inspired sleep algorithm for variational auto-encoders. In: Bebis, G., et al. (eds.) ISVC 2020. LNCS, vol. 12509, pp. 54–67. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64556-4_5
Walker, M.P., Stickgold, R.: Sleep-dependent learning and memory consolidation. Neuron 44(1), 121–133 (2004)
Roy, S.S., Ahmed, M., Akhand, M.A.H.: Noisy image classification using hybrid deep learning methods. J. Inf. Commun. Technol. 17(2), 233–269 (2018)
Ankit, A., Sengupta, A., Panda, P., Roy, K.: RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. In: Proceedings of the 54th Annual Design Automation Conference 2017, pp. 1–6 (2017)
Rueckauer, B., Liu, S.C.: Conversion of analog to spiking neural networks using sparse temporal coding. In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5. IEEE (2018)
Caporale, N., Dan, Y.: Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008)
Koch, G., Ponzo, V., Di Lorenzo, F., Caltagirone, C., Veniero, D.: Hebbian and anti-hebbian spike-timing-dependent plasticity of human cortico-cortical connections. J. Neurosci. 33(23), 9725–9733 (2013)
Kim, H., Mnih, A.: Disentangling by factorising. arXiv preprint arXiv:1802.05983 (2018)
Zhang, Z., Sabuncu, M.: Generalized cross entropy loss for training deep neural networks with noisy labels. Adv. Neural Inf. Process. Syst. 31, 8778–8788 (2018)
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204 (2017)
Maass, W.: Neural computation with winner-take-all as the only nonlinear operation. Adv. Neural Inf. Process. Syst. 12, 293–299 (2000)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519 (2017)
Liu, Z., Luo, P., Wang, X., Tang, X.: Large-scale CelebFaces Attributes (CelebA) dataset. Retrieved 15 August 2018
Acknowledgement
This work is supported by Google Cloud credits for academic research. We thank Google Cloud Platform for providing access to the computing resources used in this work.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Talafha, S., Rekabdar, B., Mousas, C., Ekenna, C. (2023). Biologically Inspired Variational Auto-Encoders for Adversarial Robustness. In: Awan, I., Younas, M., Bentahar, J., Benbernou, S. (eds) The International Conference on Deep Learning, Big Data and Blockchain (DBB 2022). DBB 2022. Lecture Notes in Networks and Systems, vol 541. Springer, Cham. https://doi.org/10.1007/978-3-031-16035-6_7
Print ISBN: 978-3-031-16034-9
Online ISBN: 978-3-031-16035-6
eBook Packages: Intelligent Technologies and Robotics (R0)