Computational Analysis of Robustness in Neural Network Classifiers

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2020 (ICANN 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12396)

Abstract

Neural networks, especially deep architectures, have proven to be excellent tools for solving various tasks, including classification. However, they are susceptible to adversarial inputs: inputs that are similar to the original ones but yield incorrect classifications, often with high confidence. This reveals the lack of robustness in these models. In this paper, we attempt to shed light on this problem by analyzing the behavior of two types of trained neural networks, fully connected and convolutional, on the MNIST, Fashion MNIST, SVHN and CIFAR10 datasets. All networks use a logistic activation function whose steepness we manipulate to study its effect on network robustness. We also generate adversarial examples with the FGSM method and by perturbing the pixels that fool the network most effectively. Our experiments reveal a trade-off between the accuracy and the robustness of the networks: models whose logistic function approaches a threshold function (i.e., has a very steep slope) appear to be more robust against adversarial inputs.
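
As a rough illustration of the two ingredients described above, the following Python sketch (not taken from the paper; the steepness parameter k, the epsilon value and the function names are assumptions made for illustration) shows a logistic activation with an adjustable slope and the FGSM perturbation of an input, given the gradient of the classification loss with respect to that input:

    import numpy as np

    def logistic(x, k=1.0):
        # Logistic activation with steepness k; as k grows, the curve
        # approaches a threshold (step) function, the regime the abstract
        # associates with higher robustness.
        return 1.0 / (1.0 + np.exp(-k * x))

    def fgsm_perturb(x, grad_x, epsilon=0.1):
        # Fast Gradient Sign Method: step every input feature by epsilon
        # in the direction of the sign of the loss gradient grad_x, which
        # has to be supplied by the trained network.
        x_adv = x + epsilon * np.sign(grad_x)
        return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in the valid range

The pixel-wise attack mentioned in the abstract could, under the same assumptions, be sketched by ranking the entries of the absolute gradient and perturbing only the highest-scoring pixels.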

Acknowledgement

This work was supported by projects VEGA 1/0796/18 and KEGA 042UK-4/2019. We thank the anonymous reviewers for their useful comments.

Author information

Corresponding author

Correspondence to Iveta Bečková.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Bečková, I., Pócoš, Š., Farkaš, I. (2020). Computational Analysis of Robustness in Neural Network Classifiers. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol 12396. Springer, Cham. https://doi.org/10.1007/978-3-030-61609-0_6

  • DOI: https://doi.org/10.1007/978-3-030-61609-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61608-3

  • Online ISBN: 978-3-030-61609-0

  • eBook Packages: Computer Science, Computer Science (R0)
