Biologically Inspired Variational Auto-Encoders for Adversarial Robustness

  • Conference paper
The International Conference on Deep Learning, Big Data and Blockchain (DBB 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 541)

Abstract

Deep Neural Networks (DNNs) have become standard tools for problems, such as classification, that are prohibitively hard to solve with hand-crafted or classical statistical methods. Nevertheless, DNNs are vulnerable to small adversarial perturbations that cause legitimate images to be misclassified. Adversarial attacks pose a security risk to deployed DNNs and highlight a divergence between how DNNs and humans perform classification. In animals and humans, sleep has been shown to improve knowledge generalization and robustness to noise. This paper proposes a defense algorithm that adds a biologically inspired sleep phase to a Variational Auto-Encoder (Defense-VAE-Sleep) in order to purge adversarial perturbations from contaminated images. We demonstrate that the sleep phase improves the generalization of a traditional VAE when the test data differ, even slightly, from the training data. We conduct extensive experiments, including comparisons with the state of the art, on three datasets: CelebA, MNIST, and Fashion-MNIST. Overall, our results show that the proposed model defends against adversarial attacks and improves classification robustness compared with two baseline defenses, Defense-VAE and Defense-GAN.
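To make the defense pipeline concrete, the sketch below (PyTorch) illustrates the purification idea shared by Defense-VAE [3] and the proposed Defense-VAE-Sleep: a VAE trained on clean images reconstructs a (possibly adversarial) input, and the reconstruction is handed to the downstream classifier. This is a minimal sketch under stated assumptions, not the authors' exact model: the architecture, hyper-parameters, and the `classifier` and `fgsm` helpers are illustrative, and the sleep phase itself (a spiking-network conversion with STDP-style updates, refs. [1, 12, 17]) is omitted.

# Minimal sketch of VAE-based purification, assuming 28x28 grayscale inputs
# (MNIST / Fashion-MNIST scale) normalized to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class PurifierVAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: image -> parameters of a latent Gaussian
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        # Decoder: latent code -> reconstructed image
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_rec = self.dec(self.fc_dec(z).view(-1, 64, 7, 7))
        return x_rec, mu, logvar

def vae_loss(x_rec, x, mu, logvar):
    # Standard ELBO: reconstruction term + KL divergence to the N(0, I) prior
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def fgsm(classifier, x, y, eps=0.1):
    # Fast Gradient Sign Method attack (Goodfellow et al., ref. [23]),
    # included only to exercise the defense; `classifier` is a hypothetical model.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def purified_predict(vae, classifier, x_adv):
    # Defense step: reconstruct the (possibly adversarial) input, then classify
    # the reconstruction instead of the raw input.
    x_clean, _, _ = vae(x_adv)
    return classifier(x_clean).argmax(dim=1)

In this style of pipeline the VAE is trained on clean data only, and at test time every input, benign or adversarial, is purified before classification, so the downstream classifier itself needs no retraining; this matches the Defense-VAE design [3] that the proposed model extends with a sleep phase.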

References

  1. Tadros, T., Krishnan, G., Ramyaa, R., Bazhenov, M.: Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks. In: International Conference on Learning Representations (2019)

  2. Han, X., et al.: Adversarial attacks and defenses in images, graphs and text: a review. Int. J. Autom. Comput. 17, 151–178 (2020)

  3. Li, X., Ji, S.: Defense-VAE: a fast and accurate defense against adversarial attacks. In: Cellier, P., Driessens, K. (eds.) ECML PKDD 2019. CCIS, vol. 1168, pp. 191–207. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43887-6_15

  4. Burbank, K.S.: Mirrored STDP implements autoencoder learning in a network of spiking neurons. PLoS Comput. Biol. 11(12), e1004566 (2015)

  5. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018)

  6. Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)

  7. Meng, D., Chen, H.: MagNet: a two-pronged defense against adversarial examples. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147 (2017)

  8. Osadchy, M., Hernandez-Castro, J., Gibson, S.J., Dunkelman, O., Pérez-Cabo, D.: No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples, with applications to CAPTCHA. IACR Cryptol. ePrint Arch. 2016, 336 (2016)

  9. Gu, S., Rigazio, L.: Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068 (2014)

  10. Nayebi, A., Ganguli, S.: Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202 (2017)

  11. Bagheri, A.: Probabilistic spiking neural networks: supervised, unsupervised and adversarial trainings (2019)

  12. Talafha, S., Rekabdar, B., Mousas, C., Ekenna, C.: Biologically inspired sleep algorithm for variational auto-encoders. In: Bebis, G., et al. (eds.) ISVC 2020. LNCS, vol. 12509, pp. 54–67. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64556-4_5

  13. Walker, M.P., Stickgold, R.: Sleep-dependent learning and memory consolidation. Neuron 44(1), 121–133 (2004)

  14. Roy, S.S., Ahmed, M., Akhand, M.A.H.: Noisy image classification using hybrid deep learning methods. J. Inf. Commun. Technol. 17(2), 233–269 (2018)

  15. Ankit, A., Sengupta, A., Panda, P., Roy, K.: RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. In: Proceedings of the 54th Annual Design Automation Conference 2017, pp. 1–6 (2017)

  16. Rueckauer, B., Liu, S.C.: Conversion of analog to spiking neural networks using sparse temporal coding. In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5. IEEE (2018)

  17. Caporale, N., Dan, Y.: Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008)

  18. Koch, G., Ponzo, V., Di Lorenzo, F., Caltagirone, C., Veniero, D.: Hebbian and anti-Hebbian spike-timing-dependent plasticity of human cortico-cortical connections. J. Neurosci. 33(23), 9725–9733 (2013)

  19. Kim, H., Mnih, A.: Disentangling by factorising. arXiv preprint arXiv:1802.05983 (2018)

  20. Zhang, Z., Sabuncu, M.: Generalized cross entropy loss for training deep neural networks with noisy labels. Adv. Neural Inf. Process. Syst. 31, 8778–8788 (2018)

  21. Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204 (2017)

  22. Maass, W.: Neural computation with winner-take-all as the only nonlinear operation. Adv. Neural Inf. Process. Syst. 12, 293–299 (2000)

  23. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  24. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE symposium on security and privacy (SP), pp. 39–57. IEEE (2017)

  25. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519 (2017)

  26. Liu, Z., Luo, P., Wang, X., Tang, X.: Large-scale CelebFaces Attributes (CelebA) dataset. Retrieved 15 August 2018

Acknowledgement

This work was supported by Google Cloud credits for academic research. We thank Google Cloud Platform for providing the computing resources used in this work.

Author information

Corresponding author

Correspondence to Sameerah Talafha.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Talafha, S., Rekabdar, B., Mousas, C., Ekenna, C. (2023). Biologically Inspired Variational Auto-Encoders for Adversarial Robustness. In: Awan, I., Younas, M., Bentahar, J., Benbernou, S. (eds) The International Conference on Deep Learning, Big Data and Blockchain (DBB 2022). DBB 2022. Lecture Notes in Networks and Systems, vol 541. Springer, Cham. https://doi.org/10.1007/978-3-031-16035-6_7
