Abstract
Deep learning (DL) models have demonstrated impressive performance in sensitive medical applications such as disease diagnosis. However, a backdoor attack embedded in a clean dataset without poisoning the labels poses a severe threat to the integrity of Artificial Intelligence (AI) technology. Much of the literature on backdoor attacks in medical applications assumes that the labels of the poisoned samples are altered as well. This compromises the stealthiness of the attack, because poisoned samples can then be identified by visual inspection through the mismatch between a sample and its label. In this paper, an elusive backdoor attack is proposed that makes the poisoned samples difficult to recognize: a backdoor signal is superimposed onto a small portion of the clean dataset at training time, while the labels are left untouched. Moreover, a hybrid attack is proposed that further increases the Attack Success Rate (ASR). The approach is evaluated on a Convolutional Neural Network (CNN)-based system for Magnetic Resonance Imaging (MRI) brain tumor classification; the results demonstrate the effectiveness of both attacks, raising concerns about the use of AI in sensitive applications.
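The core idea the abstract describes, blending a backdoor signal into a small fraction of training images without touching their labels, can be illustrated with a minimal NumPy sketch. The function name, the mixing parameter `alpha`, and the sinusoidal trigger below are illustrative assumptions, not the paper's exact signal or implementation:

```python
import numpy as np

def superimpose_backdoor(images, signal, fraction=0.1, alpha=0.2, seed=0):
    """Blend a backdoor signal into a random fraction of the images.

    Labels are never modified, so the poisoned samples keep their
    true class and survive a label/content visual inspection.
    """
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(round(len(images) * fraction)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned = images.astype(np.float32).copy()
    # Low-amplitude additive trigger, clipped back to valid pixel range.
    poisoned[idx] = np.clip(poisoned[idx] + alpha * signal, 0.0, 255.0)
    return poisoned, idx

# Illustrative low-amplitude sinusoidal trigger spanning the image width.
H, W = 64, 64
cols = np.arange(W, dtype=np.float32)
trigger = np.tile(30.0 * np.sin(2 * np.pi * 6 * cols / W), (H, 1))

# Toy stand-in for a clean grayscale MRI training set.
clean = np.random.default_rng(1).uniform(0, 255, size=(100, H, W)).astype(np.float32)
poisoned, idx = superimpose_backdoor(clean, trigger, fraction=0.1, alpha=0.2)
```

At test time, the attacker would add the same signal to an input to activate the backdoor; all non-selected training images pass through unchanged.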
Acknowledgments
This study has been partially supported by SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU and Sapienza University of Rome project EV2 (003 009 22).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Imran, M., Qureshi, H.K., Amerini, I. (2023). BHAC-MRI: Backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14234. Springer, Cham. https://doi.org/10.1007/978-3-031-43153-1_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43152-4
Online ISBN: 978-3-031-43153-1