Fair and Private CT Contrast Agent Detection

  • Conference paper
  • Published in: Ethics and Fairness in Medical Imaging (FAIMI 2024, EPIMI 2024)

Abstract

Intravenous (IV) contrast agents are an established medical tool to enhance the visibility of certain structures. However, their application substantially changes the appearance of Computed Tomography (CT) images, which, if unknown, can significantly deteriorate the diagnostic performance of neural networks. Artificial Intelligence (AI) can help to detect IV contrast, reducing the need for labour-intensive and error-prone manual labelling. However, we demonstrate that automated contrast detection can lead to discrimination against demographic subgroups. Moreover, it has been shown repeatedly that AI models can leak private training data. In this work, we analyse the fairness of conventional and privacy-preserving AI models for the detection of IV contrast on CT images. Specifically, we present models which are substantially fairer than a previously published baseline. For better comparability, we extend existing metrics to quantify the fairness of a model on a protected attribute in a single value. We provide a model which fulfils a strict Differential Privacy guarantee of \((\varepsilon , \delta ) = (8, 2.8\cdot 10^{-3})\) and, with an accuracy of \(97.42\%\), performs 5 percentage points better than the baseline. Additionally, while we confirm prior findings that strict privacy preservation increases discrimination against underrepresented subgroups, the proposed model is fairer than the baseline across all metrics when race and sex are considered as protected attributes; this also extends to age under a more relaxed privacy guarantee.
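
The abstract's claim of quantifying a model's fairness on a protected attribute in a single value can be made concrete with a small sketch. The snippet below is only an illustration under stated assumptions: it collapses per-subgroup accuracy gaps into one worst-case value, which is one common aggregation and not necessarily the metric the authors propose; the function name worst_group_gap and the variables labels, predictions, and sex are hypothetical.

```python
import numpy as np

def worst_group_gap(y_true, y_pred, groups):
    """Illustrative single-value fairness score: the largest absolute gap
    between any subgroup's accuracy and the overall accuracy. A value of 0
    means every subgroup is classified equally well; larger values indicate
    more disparity. (This aggregation is an assumption for illustration,
    not the exact metric proposed in the paper.)"""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    correct = (y_true == y_pred)            # per-sample correctness
    overall = correct.mean()                # overall accuracy
    gaps = [abs(correct[groups == g].mean() - overall)
            for g in np.unique(groups)]     # gap for each subgroup
    return max(gaps)

# Hypothetical usage with sex as the protected attribute:
# score = worst_group_gap(labels, predictions, sex)
```

A lower score means the model performs more uniformly across subgroups of the protected attribute (race, sex, or age in the paper's experiments).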

P. Kaess and A. Ziller—Equal contribution.


References

  1. Abadi, M., et al.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. ACM (2016). https://doi.org/10.1145/2976749.2978318

  2. Best, T.D., et al.: Multilevel body composition analysis on chest computed tomography predicts hospital length of stay and complications after lobectomy for lung cancer: a multicenter study. Ann. Surg. 275(5), e708–e715 (2022). https://doi.org/10.1097/SLA.0000000000004040. Epub 2020 Jul 8. PMID: 32773626

  3. Boenisch, F., Dziedzic, A., Schuster, R., Shamsabadi, A.S., Shumailov, I., Papernot, N.: When the curious abandon honesty: federated learning is not private. In: 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), pp. 175–199. IEEE (2023)

  4. Buzaglo, G., et al.: Deconstructing data reconstruction: multiclass, weight decay and general losses. In: Thirty-seventh Conference on Neural Information Processing Systems (2023)

  5. Calders, T., Verwer, S.: Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Disc. 21(2), 277–292 (2010). https://doi.org/10.1007/s10618-010-0190-x

  6. Carlini, N., et al.: Extracting training data from diffusion models. In: 32nd USENIX Security Symposium (USENIX Security 23), pp. 5253–5270 (2023)

  7. Chicco, D., Jurman, G.: The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics 21, 1–13 (2020)

  8. Cohen, A., Nissim, K.: Towards formalizing the GDPR’s notion of singling out. Proc. Natl. Acad. Sci. 117(15), 8344–8352 (2020)

  9. Cummings, R., Gupta, V., Kimpara, D., Morgenstern, J.: On the compatibility of privacy and fairness. In: UMAP '19 Adjunct, pp. 309–315. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3314183.3323847

  10. Dong, J., Roth, A., Su, W.J.: Gaussian differential privacy. J. R. Stat. Soc. Ser. B Stat Methodol. 84(1), 3–37 (2022)

  11. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226 (2012)

  12. Farrand, T., Mireshghallah, F., Singh, S., Trask, A.: Neither private nor fair: impact of data imbalance on utility and fairness in differential privacy (2020)

  13. Feng, S., Tramèr, F.: Privacy backdoors: stealing data with corrupted pretrained models. In: International Conference on Machine Learning. PMLR (2024)

  14. Fioretto, F., Tran, C., Hentenryck, P.V.: Decision making with differential privacy under a fairness lens. In: International Joint Conference on Artificial Intelligence (2021). https://api.semanticscholar.org/CorpusID:234742410

  15. Fowl, L., Geiping, J., Czaja, W., Goldblum, M., Goldstein, T.: Robbing the fed: directly obtaining private data in federated learning with modified models. In: Tenth International Conference on Learning Representations (2022)

  16. Güld, M., et al.: Quality of DICOM header information for image categorization. In: Proceedings of SPIE - The International Society for Optical Engineering, vol. 4685 (2002). https://doi.org/10.1117/12.467017

  17. Haim, N., Vardi, G., Yehudai, G., Shamir, O., Irani, M.: Reconstructing training data from trained neural networks. Adv. Neural. Inf. Process. Syst. 35, 22911–22924 (2022)

  18. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 29, 3315–3323 (2016)

  19. Hayes, J., Mahloujifar, S., Balle, B.: Bounding training data reconstruction in DP-SGD. In: Thirty-seventh Conference on Neural Information Processing Systems (2023)

  20. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1(9), 389–399 (2019)

  21. Klause, H., Ziller, A., Rueckert, D., Hammernik, K., Kaissis, G.: Differentially private training of residual networks with scale normalisation. In: Theory and Practice of Differential Privacy Workshop, ICML (2022)

  22. Lartaud, P.J., Rouchaud, A., Rouet, J.M., Nempont, O., Boussel, L.: Spectral CT based training dataset generation and augmentation for conventional CT vascular segmentation, pp. 768–775 (2019). https://doi.org/10.1007/978-3-030-32245-8_85

  23. Massachusetts Life Sciences Center: Computational resources and services. https://www.masslifesciences.com/

  24. Matthews, B.W.: Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. et Biophys. Acta (BBA) - Protein Structure 405(2), 442–451 (1975)

  25. Nasr, M., Song, S., Thakurta, A., Papernot, N., Carlini, N.: Adversary instantiation: lower bounds for differentially private machine learning. In: 2021 IEEE Symposium on Security and Privacy (SP), pp. 866–882. IEEE (2021)

  26. Sanyal, A., Hu, Y., Yang, F.: How unfair is private learning? In: Cussens, J., Zhang, K. (eds.) Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence. Proceedings of Machine Learning Research, vol. 180, pp. 1738–1748. PMLR (2022). https://proceedings.mlr.press/v180/sanyal22a.html

  27. Seyyed-Kalantari, L., Zhang, H., McDermott, M.B., Chen, I.Y., Ghassemi, M.: Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat. Med. 27(12), 2176–2182 (2021)

  28. Sofka, M., et al.: Automatic contrast phase estimation in CT volumes. In: Fichtinger, G., Martel, A., Peters, T. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011, pp. 166–174. Springer Berlin Heidelberg, Berlin, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23626-6_21

  29. Tayebi Arasteh, S., et al.: Preserving fairness and diagnostic accuracy in private large-scale AI models for medical imaging. Commun. Med. 4(1) (2024). https://doi.org/10.1038/s43856-024-00462-6

  30. Ye, Z., et al.: Deep learning-based detection of intravenous contrast enhancement on CT scans. Radiol. Artif. Intell. 4(3), e210285 (2022). https://doi.org/10.1148/ryai.210285

  31. Ziller, A., et al.: Reconciling privacy and accuracy in AI for medical imaging. Nat. Mach. Intell. 1–11 (2024)

Acknowledgments

This work was supported by the German Ministry of Education and Research (BMBF) under grant number 01ZZ2316C (PrivateAIM). All models were trained using computational resources and services provided by the Massachusetts Life Sciences Center [23].

Author information

Corresponding authors

Correspondence to Philipp Kaess or Alexander Ziller.

Ethics declarations

Disclosure of Interests

The authors declare no competing interests.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kaess, P., Ziller, A., Mantz, L., Rueckert, D., Fintelmann, F.J., Kaissis, G. (2025). Fair and Private CT Contrast Agent Detection. In: Puyol-Antón, E., et al. (eds.) Ethics and Fairness in Medical Imaging. FAIMI EPIMI 2024. Lecture Notes in Computer Science, vol 15198. Springer, Cham. https://doi.org/10.1007/978-3-031-72787-0_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72787-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72786-3

  • Online ISBN: 978-3-031-72787-0

  • eBook Packages: Computer Science; Computer Science (R0)
