Abstract
Deep learning has achieved impressive performance across a wide range of medical imaging tasks. However, its inherent bias against specific groups hinders its clinical applicability in equitable healthcare systems. A recently discovered phenomenon, Neural Collapse (NC), has shown potential in improving the generalization of state-of-the-art deep learning models. Nonetheless, its implications for bias in medical imaging remain unexplored. Our study investigates deep learning fairness through the lens of NC. We analyze the training dynamics of models as they approach NC when trained on biased datasets, and examine the subsequent impact on test performance, focusing specifically on label bias. We find that biased training initially results in different NC configurations across subgroups before the model converges to a final NC solution by memorizing all data samples. Through extensive experiments on three medical imaging datasets (PAPILA, HAM10000, and CheXpert), we find that in biased settings, NC can lead to a significant drop in F1 score across all subgroups. Our code is available at https://gitlab.com/radiology/neuro/neural-collapse-fairness.
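For context, proximity to Neural Collapse is commonly quantified with the within-class variability metric (often called NC1) introduced by Papyan et al. (2020): the trace of the within-class feature covariance multiplied by the pseudoinverse of the between-class covariance, normalized by the number of classes, which approaches zero as features of each class collapse onto their class mean. The sketch below is a minimal illustration of how such a metric could be computed per sensitive subgroup; it is not the authors' evaluation code, and the array names (feats, labels, groups) are hypothetical.

```python
import numpy as np

def nc1_variability(features, labels):
    """NC1 metric of Papyan et al. (2020): tr(Sigma_W @ pinv(Sigma_B)) / C.
    features: (N, d) array of penultimate-layer activations.
    labels:   (N,) array of class labels.
    Values near zero indicate within-class variability collapse."""
    classes = np.unique(labels)
    C = len(classes)
    d = features.shape[1]
    global_mean = features.mean(axis=0)
    sigma_w = np.zeros((d, d))  # within-class covariance, averaged over samples
    sigma_b = np.zeros((d, d))  # between-class covariance, averaged over classes
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        centered = fc - mu_c
        sigma_w += centered.T @ centered / len(features)
        diff = (mu_c - global_mean)[:, None]
        sigma_b += diff @ diff.T / C
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / C

# Hypothetical usage: track NC1 separately per sensitive subgroup
# to see whether subgroups reach different NC configurations.
# feats, labels, groups = ...  # extracted from a trained model
# for g in np.unique(groups):
#     mask = groups == g
#     print(g, nc1_variability(feats[mask], labels[mask]))
```

Computing the metric separately per subgroup, rather than over the pooled training set, is what would expose the divergent intermediate NC configurations described in the abstract before the model memorizes all samples.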
Acknowledgments
This project is supported by a 2022 Erasmus MC Fellowship. Esther E. Bron is a recipient of TAP-dementia, a ZonMw-funded project (#10510032120003) in the context of the Dutch National Dementia Strategy. Esther E. Bron and Stefan Klein are recipients of EUCAIM (Cancer Image Europe), co-funded by the European Union under Grant Agreement 101100633. Marawan Elbatel is supported by the Hong Kong PhD Fellowship Scheme (HKPFS) from the Hong Kong Research Grants Council.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Mouheb, K., Elbatel, M., Klein, S., Bron, E.E. (2024). Evaluating the Fairness of Neural Collapse in Medical Image Classification. In: Linguraru, M.G., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol. 15010. Springer, Cham. https://doi.org/10.1007/978-3-031-72117-5_27
DOI: https://doi.org/10.1007/978-3-031-72117-5_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72116-8
Online ISBN: 978-3-031-72117-5
eBook Packages: Computer Science, Computer Science (R0)