Adversarial attack-based security vulnerability verification using deep learning library for multimedia video surveillance

Abstract

Although deep learning has recently been employed in various fields, it carries the risk of adversarial attacks. In this study, we experimentally verified that the classification accuracy of deep learning image classification models is lowered by adversarial samples generated by malicious attackers. We used the MNIST dataset, a representative image dataset, and the NSL-KDD dataset, a representative network dataset. We measured detection accuracy by injecting adversarial samples into Autoencoder and Convolutional Neural Network (CNN) classification models built with the TensorFlow and PyTorch libraries. The adversarial samples were generated by transforming the MNIST and NSL-KDD test sets using the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM). When these samples were injected into the classification models, the detection accuracy dropped by a minimum of 21.82% and a maximum of 39.08%.
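
To make the attack concrete, the sketch below implements FGSM, one of the two perturbation methods the abstract names, in PyTorch. This is a minimal illustration rather than the authors' implementation: the model, the epsilon value, and the use of cross-entropy loss are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step Fast Gradient Sign Method (Goodfellow et al., 2014).

    Shifts each input feature by +/- epsilon in the direction that
    increases the classification loss, so the adversarial sample stays
    within an L-infinity ball of radius epsilon around x. The epsilon
    value and cross-entropy loss are illustrative assumptions, not
    parameters reported in the paper.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp back to the valid input range (e.g., [0, 1] for MNIST pixels).
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: compare a trained classifier's accuracy on clean
# versus perturbed test inputs to observe the accuracy drop.
# adv_images = fgsm_attack(cnn, images, labels, epsilon=0.25)
# clean_acc = (cnn(images).argmax(dim=1) == labels).float().mean()
# adv_acc = (cnn(adv_images).argmax(dim=1) == labels).float().mean()
```

JSMA, the other method used, works iteratively instead: it perturbs only the few input features whose saliency, computed from the model's Jacobian, most increases the probability of the target class.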

Acknowledgments

- This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2016-0-00304) supervised by the IITP (Institute for Information & communications Technology Promotion)

- This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2018R1D1A1B07043349)

- This research was supported by an IITP grant funded by the Korean government (MSIT) (No. 2018-0-00336, Advanced Manufacturing Process Anomaly Detection to prevent the Smart Factory Operation Failure by Cyber Attacks)

- This work was supported by the Ajou University research fund

Author information

Corresponding author

Correspondence to Taeshik Shon.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jeong, J., Kwon, S., Hong, MP. et al. Adversarial attack-based security vulnerability verification using deep learning library for multimedia video surveillance. Multimed Tools Appl 79, 16077–16091 (2020). https://doi.org/10.1007/s11042-019-7262-8
