Abstract
Machine learning has achieved strong performance in practical application domains such as computer vision, natural language processing, and autonomous driving. As it becomes more widely deployed, its security has attracted increasing attention. Previous research shows that machine learning models are highly vulnerable to various kinds of adversarial attacks, so the security of different models under different attacks needs to be evaluated. In this paper, we provide a method for comparing the security of different machine learning models. We first classify adversarial attacks into three classes by their attack targets, namely attacks on test data, attacks on training data, and attacks on model parameters, and define subclasses under different assumptions about the adversary. We then take a support vector machine (SVM), a neural network with one hidden layer (NN), and a convolutional neural network (CNN) as examples and launch different kinds of attacks on them to evaluate and compare their security. In addition, our experiments illustrate the effects of concealing actions launched by the adversary.
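As an illustration of the first attack class in this taxonomy (attacks on test data, i.e. evasion), the sketch below applies an FGSM-style perturbation to a test point against a simple linear classifier. This is a minimal sketch under assumed toy data, a logistic-regression stand-in for the SVM/NN/CNN victims, and an illustrative step size `eps`; it is not the paper's experimental setup.

```python
# Minimal sketch (not the paper's implementation) of an "attack on test data"
# (evasion): an FGSM-style perturbation of one test point against a linear
# classifier. Data, model, and eps are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 20 dimensions.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)), rng.normal(1.0, 1.0, (100, 20))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Train a simple linear model (logistic loss, gradient descent) as a
# stand-in victim model.
w, b = np.zeros(20), 0.0
for _ in range(500):
    scores = X @ w + b
    grad_coeff = -y / (1.0 + np.exp(y * scores))       # d(logistic loss)/d(score)
    w -= 0.1 * (X.T @ grad_coeff / len(y) + 1e-3 * w)  # weight-decay regularized step
    b -= 0.1 * grad_coeff.mean()

def fgsm_evasion(x, label, eps=0.5):
    """Move x one signed-gradient step in the direction that increases the loss."""
    grad_x = -label * w / (1.0 + np.exp(label * (x @ w + b)))  # gradient of loss w.r.t. x
    return x + eps * np.sign(grad_x)

x0, y0 = X[0], y[0]
x_adv = fgsm_evasion(x0, y0)
print("clean score:", x0 @ w + b, "adversarial score:", x_adv @ w + b)
```

The same pattern extends to the other victim models by replacing the hand-rolled gradient with the model's own input gradient; attacks on training data (poisoning) and on model parameters instead modify the training set or the learned weights rather than the test point.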