Inject Backdoor in Measured Data to Jeopardize Full-Stack Medical Image Analysis System

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15007)


Abstract

Deep learning has achieved remarkable success in the medical domain, making it crucial to assess its vulnerabilities in medical systems. This study examines backdoor attack (BA) methods to evaluate the reliability and security of medical image analysis systems. However, most BA methods are post-imaging attacks that focus on isolated downstream tasks, and they overlook a comprehensive security assessment of full-stack medical image analysis systems, from data acquisition to analysis. Reconstructing images from measured data involves complex transformations, which complicates the design of triggers in the measurement domain; moreover, a hacker typically has access only to the measured data inside the scanner. To tackle this challenge, this paper introduces a novel Learnable Trigger Generation Method (LTGM) for measured data. This pre-imaging attack targets the downstream task without compromising the reconstruction process or imaging quality. LTGM employs a trigger function in the measurement domain to inject a learned trigger into the measured data. To avoid bias from handcrafted knowledge, the trigger is learned from the gradients of two key tasks, reconstruction and analysis, so that it balances its impact on the analysis task against the additional noise and artifacts it introduces into the reconstructed images. Comprehensive experiments on a public dataset demonstrate the vulnerabilities of full-stack medical systems and validate the effectiveness of the proposed method. Our code is available at https://github.com/Deep-Imaging-Group/LTGM.
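To make the attack model concrete, the sketch below shows one way such a measurement-domain trigger could be learned by descending the gradients of both tasks, as the abstract describes. It assumes a frozen PyTorch reconstruction network (`recon_net`), a frozen downstream classifier (`classifier`), and a simple additive trigger; these names, the loss weights, and the optimizer are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# A minimal sketch (not the authors' code) of a pre-imaging backdoor:
# a learnable trigger is added to the measured data (e.g., a CT sinogram)
# and optimized against two frozen models -- the reconstruction network
# and the downstream classifier. All names here are illustrative.
import torch
import torch.nn.functional as F

def learn_trigger(recon_net, classifier, sinograms, target_label,
                  alpha=1.0, beta=1.0, steps=200, lr=1e-2):
    """Optimize an additive measurement-domain trigger.

    alpha weighs the attack objective (steer the downstream prediction
    to target_label); beta weighs imaging fidelity (the poisoned
    reconstruction stays close to the clean one, so the trigger adds
    minimal visible noise and artifacts).
    """
    recon_net.eval()
    classifier.eval()
    with torch.no_grad():                     # clean reference images
        clean_imgs = recon_net(sinograms)

    trigger = torch.zeros_like(sinograms[0], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    target = torch.full((sinograms.size(0),), target_label,
                        dtype=torch.long, device=sinograms.device)

    for _ in range(steps):
        opt.zero_grad()
        # Trigger is injected before imaging, in the measurement domain.
        poisoned_imgs = recon_net(sinograms + trigger)
        logits = classifier(poisoned_imgs)
        # Gradients from both tasks shape the trigger: attack the
        # analysis task while preserving reconstruction quality.
        loss = alpha * F.cross_entropy(logits, target) \
             + beta * F.mse_loss(poisoned_imgs, clean_imgs)
        loss.backward()
        opt.step()
    return trigger.detach()
```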

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62271335; in part by the Sichuan Science and Technology Program under Grant 2021JDJQ0024; in part by the Sichuan University “From 0 to 1” Innovative Research Program under Grant 2022SCUH0016; and in part by the China Scholarship Council under Grant 202306240017.

Author information

Corresponding author

Correspondence to Yi Zhang.

Ethics declarations

Disclosure of Interests

The authors have no competing interests relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yang, Z., Chen, Y., Sun, M., Zhang, Y. (2024). Inject Backdoor in Measured Data to Jeopardize Full-Stack Medical Image Analysis System. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15007. Springer, Cham. https://doi.org/10.1007/978-3-031-72104-5_38

  • DOI: https://doi.org/10.1007/978-3-031-72104-5_38

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72103-8

  • Online ISBN: 978-3-031-72104-5

  • eBook Packages: Computer Science, Computer Science (R0)
