
Decision Boundary to Improve the Sensitivity of Deep Neural Networks Models

  • Conference paper
Business Intelligence (CBI 2022)

Abstract

Despite their performance and relevance across many image classification tasks, deep neural network classifiers struggle with minor perturbations of their inputs. In particular, adversarial examples expose a serious weakness of deep learning models in many areas, such as disease recognition. The aim of this paper is to improve the robustness of deep neural network models to small input perturbations by comparing standard training with adversarial training, which maximizes the distance between the predicted instances and the decision boundary. We examine the behavior of the decision boundary during model training, the minimum distance of the input images from the decision boundary, and how this distance evolves over training. The results show that the distance between the images and the decision boundary decreases during standard training, whereas adversarial training increases this distance and thereby improves the robustness of the model. Our work presents a new perspective on the sensitivity problem of deep neural networks. We find a very strong relationship between the robustness of a deep neural network and its training phase: robustness is created during training rather than being predetermined by the initialization or the architecture.
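The two ingredients described above, adversarial training and measuring the minimum distance of an input to the decision boundary, can be sketched in a few lines of code. The sketch below assumes PyTorch and uses placeholder names (model, loader, optimizer, device); it is an illustration of the general technique under these assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Projected gradient descent: search the eps-ball around x for a
    # perturbation that increases the loss (and ideally flips the label).
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()         # keep a valid image
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    # One epoch of adversarial training: the model is updated on PGD examples,
    # which pushes training points away from the decision boundary.
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()

def boundary_distance(model, x, y, direction, iters=20):
    # Rough estimate of the distance to the decision boundary for a single
    # image x (shape [1, C, H, W]): binary-search the smallest step t along
    # `direction` at which the predicted label changes, and report the L2 norm
    # of that step. If the label never flips for t <= 1, the returned value is
    # only an upper bound.
    model.eval()
    lo, hi = 0.0, 1.0
    with torch.no_grad():
        for _ in range(iters):
            mid = (lo + hi) / 2
            pred = model((x + mid * direction).clamp(0, 1)).argmax(dim=1)
            if pred.item() != y.item():
                hi = mid   # already misclassified: the boundary is closer
            else:
                lo = mid   # still the original label: the boundary is farther
    return hi * direction.norm().item()

A natural choice for the direction is the adversarial perturbation itself, e.g. direction = pgd_attack(model, x, y) - x. Tracking boundary_distance on a fixed set of images across epochs mirrors the measurement discussed in the abstract, where the distance shrinks under standard training and grows under adversarial training.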


Author information

Corresponding author

Correspondence to Mohamed Ouriha.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Ouriha, M., El Habouz, Y., El Mansouri, O. (2022). Decision Boundary to Improve the Sensitivity of Deep Neural Networks Models. In: Fakir, M., Baslam, M., El Ayachi, R. (eds) Business Intelligence. CBI 2022. Lecture Notes in Business Information Processing, vol 449. Springer, Cham. https://doi.org/10.1007/978-3-031-06458-6_4

  • DOI: https://doi.org/10.1007/978-3-031-06458-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06457-9

  • Online ISBN: 978-3-031-06458-6

  • eBook Packages: Computer Science, Computer Science (R0)
