Poisoning-Attack Detection Using an Auto-encoder for Deep Learning Models

  • Conference paper
Digital Forensics and Cyber Crime (ICDF2C 2022)

Abstract

Modern Deep Learning (DL) models can be trained in various ways, including incremental learning. The idea is that a model incrementally trained on a user's own data will perform better on that user's new data. The model owner can share the model with other users, who then train it on their data and return it to the owner. However, these users can perform poisoning attacks (PA) that modify the model's behavior in the attacker's favor. In the context of incremental learning, we are interested in detecting whether a DL model for image classification has undergone a poisoning attack. To mount such an attack, an attacker can, for example, modify the labels of some training data and use these data to fine-tune the model so that the attacked model incorrectly classifies images similar to the attacked ones while maintaining good classification performance on other images. As a countermeasure, we propose a poisoned-model detector capable of detecting various types of PA. The technique exploits the reconstruction error of a machine learning-based auto-encoder (AE) trained to model the distribution of the activation maps from the second-to-last layer of the model to be protected. By analyzing the AE reconstruction errors for given inputs, we demonstrate that a PA can be distinguished from a fine-tuning operation intended to improve classification performance. We demonstrate the performance of our method on a variety of architectures and in the context of a DL model for mass cancer detection in mammography images.
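
To make the general idea concrete, here is a minimal sketch, not the authors' exact pipeline: it assumes PyTorch, and the layer sizes, probe set, and thresholding rule are illustrative choices not specified in the paper. An auto-encoder is fit on penultimate-layer activation vectors collected from the protected model, and a returned model is flagged when its activations yield an unusually high reconstruction error.

```python
# Sketch of an activation-based poisoned-model detector (assumed design, not the
# paper's exact implementation). Activations are vectors taken from the
# second-to-last layer of the model to be protected.
import torch
import torch.nn as nn


class ActivationAE(nn.Module):
    """Small dense auto-encoder over penultimate-layer activation vectors."""

    def __init__(self, dim: int, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def fit_ae(activations: torch.Tensor, epochs: int = 50, lr: float = 1e-3) -> ActivationAE:
    """Train the AE to reconstruct activations of the clean (reference) model."""
    ae = ActivationAE(dim=activations.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(ae(activations), activations)
        loss.backward()
        opt.step()
    return ae


def reconstruction_errors(ae: ActivationAE, activations: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        return ((ae(activations) - activations) ** 2).mean(dim=1)


def looks_poisoned(ae: ActivationAE, clean_acts: torch.Tensor,
                   returned_acts: torch.Tensor, margin: float = 3.0) -> bool:
    """Assumed decision rule: flag the returned model if the mean reconstruction
    error of its activations on a probe set exceeds the clean baseline by a margin."""
    base = reconstruction_errors(ae, clean_acts)
    test = reconstruction_errors(ae, returned_acts)
    return test.mean().item() > base.mean().item() + margin * base.std().item()
```

In practice the activations would be collected over a held-out probe set, and the decision threshold calibrated against benign fine-tuned models so that legitimate fine-tuning is not flagged; the three-sigma margin above is only a placeholder.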

This work was partly supported by the Joint Laboratory SePEMeD, ANR-13-LAB2-0006-01, and the French ANR via the European program “Preservation of R&D employment in the framework of the French recovery plan” under the reference ANR-21-PRRD-0027-01.

Author information

Corresponding author

Correspondence to Bellafqira Reda.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Anass, E.M., Gouenou, C., Reda, B. (2023). Poisoning-Attack Detection Using an Auto-encoder for Deep Learning Models. In: Goel, S., Gladyshev, P., Nikolay, A., Markowsky, G., Johnson, D. (eds) Digital Forensics and Cyber Crime. ICDF2C 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 508. Springer, Cham. https://doi.org/10.1007/978-3-031-36574-4_22

  • DOI: https://doi.org/10.1007/978-3-031-36574-4_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36573-7

  • Online ISBN: 978-3-031-36574-4

  • eBook Packages: Computer Science, Computer Science (R0)
