
A Poisoning Attack Against the Recognition Model Trained by the Data Augmentation Method

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12487)

Abstract

Training pipelines often preprocess the training set with data augmentation. Targeting this training mode, this paper proposes a poisoning attack scheme that carries out the attack effectively. For a traffic sign recognition system, the decision boundary is shifted through data poisoning so that the system misclassifies the target sample. In this scheme, a “backdoor” belonging to the attacker is embedded in the poisoned samples, allowing the attacker to manipulate the recognition model (i.e., the target sample is classified into the attacker’s chosen category). The attack is difficult to detect because the victim treats the poisoned samples as healthy ones. Experimental results show that the scheme successfully attacks models trained with data augmentation, realizes the targeted attack against the selected sample, and achieves a high success rate. It is hoped that this work will raise awareness of the important issues of data reliability and data provenance.
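As a rough, illustrative sketch of this attack pattern (not the authors' implementation), the snippet below shows one common way such backdoor poisoning is realized: a small trigger patch is stamped onto copies of a few training images, the copies are relabeled as the attacker's target class, and the poisoned copies are mixed into the training set that the victim will later augment and train on. The white corner patch, the 5% poisoning rate, and the NumPy image format are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of backdoor-style data poisoning (illustrative assumptions only).
import numpy as np

def add_trigger(image: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Return a copy of `image` with a small white patch (the backdoor trigger)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 255  # bottom-right white square
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_class: int, poison_rate: float = 0.05,
                   seed: int = 0):
    """Inject triggered, relabeled copies of a fraction of the training set."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = np.stack([add_trigger(images[i]) for i in idx])
    poisoned_labels = np.full(n_poison, target_class, dtype=labels.dtype)
    # The victim then trains, with its usual data augmentation, on the union
    # of the clean set and these poisoned samples.
    return (np.concatenate([images, poisoned_images]),
            np.concatenate([labels, poisoned_labels]))
```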



Acknowledgement

This work is supported by the Natural Science Foundation of China (Nos. U1811264, U1711263, 61966009), and the Natural Science Foundation of Guangxi Province (Nos. 2019GXNSFBA245049, 2019GXNSFBA245059, 2018GXNSFDA281045).

Author information

Corresponding author

Correspondence to Long Li.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Check for updates. Verify currency and authenticity via CrossMark

Cite this paper

Yang, Y., Li, L., Chang, L., Gu, T. (2020). A Poisoning Attack Against the Recognition Model Trained by the Data Augmentation Method. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_49

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-62460-6_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62459-0

  • Online ISBN: 978-3-030-62460-6

  • eBook Packages: Computer Science, Computer Science (R0)
