
Security Comparison of Machine Learning Models Facing Different Attack Targets

  • Conference paper
Science of Cyber Security (SciSec 2019)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 11933)


Abstract

Machine learning has exhibited great performance in several practical application domains such as computer vision, natural language processing, and autonomous driving. As it becomes more widely used in practice, its security issues have attracted more and more attention. Previous research shows that machine learning models are very vulnerable to various kinds of adversarial attacks. Therefore, we need to evaluate the security of different machine learning models under different attacks. In this paper, we aim to provide a security comparison method for different machine learning models. We first classify adversarial attacks into three classes by their attack targets, namely attacks on test data, attacks on training data, and attacks on model parameters, and define subclasses under different assumptions. We then take support vector machines (SVM), neural networks with one hidden layer (NN), and convolutional neural networks (CNN) as examples and launch different kinds of attacks on them to evaluate and compare model security. Additionally, our experiments illustrate the effects of concealing actions launched by the adversary.
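The first attack class in the abstract, attacks on test data, is usually realized by perturbing inputs at inference time (evasion). As a minimal, hypothetical sketch of that idea, the snippet below mounts an FGSM-style perturbation against a toy logistic-regression classifier trained on synthetic data; the model, data, and hyper-parameters are assumptions chosen for illustration and are not taken from the paper.

    # Illustrative sketch (not the paper's code): an FGSM-style test-time attack
    # against a small logistic-regression classifier on synthetic 2-D data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-class data: class 0 centred at -1, class 1 centred at +1.
    X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
    y = np.hstack([np.zeros(200), np.ones(200)])

    # Train logistic regression with plain gradient descent on the log loss.
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y=1|x)
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= 0.1 * grad_w
        b -= 0.1 * grad_b

    def fgsm(x, label, eps):
        """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        grad_x = (p - label) * w                 # d(log loss)/dx for this model
        return x + eps * np.sign(grad_x)

    x0, y0 = X[0], y[0]                          # a clean class-0 point
    x_adv = fgsm(x0, y0, eps=0.5)                # push it toward the decision boundary
    predict = lambda x: int((x @ w + b) > 0)
    print("clean prediction:", predict(x0), "adversarial prediction:", predict(x_adv))

Attacks on training data (poisoning) and on model parameters would instead modify the training set or the learned weights before deployment; the same gradient information is typically exploited, but at a different stage of the pipeline.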



Author information


Corresponding author

Correspondence to Zhaofeng Liu.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Z., Jia, Z., Lu, W. (2019). Security Comparison of Machine Learning Models Facing Different Attack Targets. In: Liu, F., Xu, J., Xu, S., Yung, M. (eds.) Science of Cyber Security. SciSec 2019. Lecture Notes in Computer Science, vol 11933. Springer, Cham. https://doi.org/10.1007/978-3-030-34637-9_6


  • DOI: https://doi.org/10.1007/978-3-030-34637-9_6


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34636-2

  • Online ISBN: 978-3-030-34637-9

  • eBook Packages: Computer Science, Computer Science (R0)
