BHAC-MRI: Backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN

  • Conference paper
Image Analysis and Processing – ICIAP 2023 (ICIAP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14234)

Abstract

Deep learning (DL) models have demonstrated impressive performance in sensitive medical applications such as disease diagnosis. However, a backdoor attack embedded in a clean dataset without poisoning the labels poses a severe threat to the integrity of Artificial Intelligence (AI) technology. Much of the existing work on backdoor attacks in medical applications assumes that the labels of the poisoned samples are altered as well. This compromises the elusiveness of the attack, because poisoned samples can be identified by visual inspection through the mismatch between their content and their labels. In this paper, an elusive backdoor attack is proposed that makes the poisoned samples difficult to recognize: a backdoor signal is superimposed onto a small portion of the clean dataset at training time, while the labels are left untouched. Moreover, a hybrid attack is proposed that further increases the Attack Success Rate (ASR). The approach is evaluated on a Convolutional Neural Network (CNN)-based system for Magnetic Resonance Imaging (MRI) brain tumor classification; the results demonstrate the effectiveness of the attacks and raise concerns about the use of AI in sensitive applications.
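To make the threat model concrete, the sketch below shows one way a clean-label backdoor of this kind, together with an ASR measurement, could be simulated. It is a minimal illustration under stated assumptions, not the paper's implementation: the sinusoidal trigger, the blending strength alpha, the poisoning fraction, the single-channel image format, and the Keras-style model.predict interface are all assumptions made here for demonstration, and the hybrid attack is not reproduced.

```python
import numpy as np

def make_signal(height, width, period=6):
    # Hypothetical trigger: a faint horizontal sinusoidal pattern in [0, 1].
    # The actual backdoor signal used in the paper is not given in this excerpt.
    cols = np.arange(width)
    row = 0.5 * (1.0 + np.sin(2.0 * np.pi * cols / period))
    return np.tile(row, (height, 1))

def superimpose(image, signal, alpha=0.1):
    # Blend a low-amplitude signal into a single-channel MRI slice of shape
    # (H, W), normalised to [0, 1]; labels are never touched, which is what
    # keeps the attack clean-label.
    return np.clip((1.0 - alpha) * image + alpha * signal, 0.0, 1.0)

def poison_training_set(images, labels, target_class, signal,
                        poison_fraction=0.05, alpha=0.1, seed=0):
    # Embed the trigger in a small fraction of the images that already
    # belong to the target class, leaving all labels unchanged.
    rng = np.random.default_rng(seed)
    images = images.copy()
    target_idx = np.flatnonzero(labels == target_class)
    n_poison = int(poison_fraction * target_idx.size)
    chosen = rng.choice(target_idx, size=n_poison, replace=False)
    for i in chosen:
        images[i] = superimpose(images[i], signal, alpha)
    return images, chosen

def attack_success_rate(model, images, labels, signal, target_class, alpha=0.1):
    # ASR: share of triggered non-target-class test images that the trained
    # model (assumed to expose a Keras-style predict method) assigns to the
    # attacker's target class.
    non_target = images[labels != target_class]
    triggered = np.stack([superimpose(x, signal, alpha) for x in non_target])
    preds = model.predict(triggered).argmax(axis=1)
    return float(np.mean(preds == target_class))
```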

Acknowledgments

This study has been partially supported by SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU and Sapienza University of Rome project EV2 (003 009 22).

Author information

Correspondence to Muhammad Imran.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Imran, M., Qureshi, H.K., Amerini, I. (2023). BHAC-MRI: Backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14234. Springer, Cham. https://doi.org/10.1007/978-3-031-43153-1_28

  • DOI: https://doi.org/10.1007/978-3-031-43153-1_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43152-4

  • Online ISBN: 978-3-031-43153-1

  • eBook Packages: Computer Science, Computer Science (R0)
