Abstract

The prevalence of machine learning in biomedical research is rapidly growing, yet the trustworthiness of such research is often overlooked. While some previous works have investigated the ability of adversarial attacks to degrade model performance in medical imaging, the ability to falsely improve performance via recently developed “enhancement attacks” may pose a greater threat to biomedical machine learning. In the spirit of developing attacks to better understand trustworthiness, we developed two techniques to drastically enhance the prediction performance of classifiers with minimal changes to features: 1) general enhancement of prediction performance, and 2) enhancement of a particular method over another. Our enhancement framework falsely improved classifiers’ accuracy from 50% to almost 100% while maintaining high feature similarities between the original and enhanced data (Pearson’s \(r > 0.99\)). Similarly, the method-specific enhancement framework was effective in falsely improving the performance of one method over another. For example, a simple neural network outperformed logistic regression by 17% on our enhanced dataset, although no performance differences were present in the original dataset. Crucially, the original and enhanced data remained similar (\(r = 0.99\)). Our results demonstrate the feasibility of achieving any desired prediction performance with minor data manipulations, which presents an interesting ethical challenge for the future of biomedical machine learning. These findings emphasize the need for more robust data provenance tracking and other precautionary measures to ensure the integrity of biomedical machine learning research. The code is available at https://github.com/mattrosenblatt7/enhancement_EPIMI.
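To make the idea of an enhancement attack concrete, the sketch below nudges each training sample along the gradient of a logistic loss taken with respect to the data itself, so a surrogate linear classifier separates the (unchanged) labels more cleanly while the features barely move. This is only a minimal illustration under assumed names (`enhance`, `step`, `n_iter`) and a logistic-regression surrogate; it is not the authors' implementation, which is available at https://github.com/mattrosenblatt7/enhancement_EPIMI.

```python
# Minimal sketch of a gradient-based enhancement manipulation (not the
# authors' implementation). A surrogate linear model (w, b) stands in for
# whatever classifier the attacker wants to "help".
import numpy as np
from scipy.stats import pearsonr


def enhance(X, y, w, b, step=0.01, n_iter=50):
    """Nudge features so the surrogate classifier separates y more cleanly.

    X : (n_samples, n_features) original data
    y : labels coded as -1/+1
    w : (n_features,) weights of a surrogate linear model fit on (X, y)
    b : intercept of the surrogate model
    """
    X_enh = X.astype(float).copy()
    for _ in range(n_iter):
        margins = y * (X_enh @ w + b)                    # y_i * f(x_i)
        # Gradient of the logistic loss with respect to the *data*:
        # stepping against it pushes each sample toward its own class.
        grad = -(y / (1.0 + np.exp(margins)))[:, None] * w[None, :]
        X_enh -= step * grad
    return X_enh


# Feature-wise similarity between original and enhanced data, in the spirit
# of the paper's Pearson-correlation check (assumes X and X_enh exist):
# r_vals = [pearsonr(X[:, j], X_enh[:, j])[0] for j in range(X.shape[1])]
```

The step size and number of iterations control the trade-off the paper quantifies: larger perturbations enhance apparent accuracy more, at the cost of lowering the correlation between original and enhanced features.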



Data use declaration and acknowledgment

The UCLA Consortium for Neuropsychiatric Phenomics (download: https://openneuro.org/datasets/ds000030/versions/00016) and the Philadelphia Neurodevelopmental Cohort (dbGaP Study Accession: phs000607.v1.p1) are public datasets that obtained consent from participants and supervision from ethical review boards. We have local human research approval for using these datasets.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Rosenblatt, M., Dadashkarimi, J., Scheinost, D. (2023). Gradient-Based Enhancement Attacks in Biomedical Machine Learning. In: Wesarg, S., et al. (eds.) Clinical Image-Based Procedures, Fairness of AI in Medical Imaging, and Ethical and Philosophical Issues in Medical Imaging. CLIP EPIMI FAIMI 2023. Lecture Notes in Computer Science, vol 14242. Springer, Cham. https://doi.org/10.1007/978-3-031-45249-9_29


  • DOI: https://doi.org/10.1007/978-3-031-45249-9_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45248-2

  • Online ISBN: 978-3-031-45249-9

  • eBook Packages: Computer Science, Computer Science (R0)
