
BiasPruner: Debiased Continual Learning for Medical Image Classification

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Abstract

Continual Learning (CL) is crucial for enabling networks to dynamically adapt as they learn new tasks sequentially, accommodating new data and classes without catastrophic forgetting. Diverging from conventional views of CL, our paper introduces a new perspective wherein forgetting can actually benefit sequential learning. Specifically, we present BiasPruner, a CL framework that intentionally forgets spurious correlations in the training data that could lead to shortcut learning. Using a new bias score that measures the contribution of each unit in the network to learning spurious features, BiasPruner prunes the units with the highest bias scores to form a debiased subnetwork that is preserved for the given task. When BiasPruner learns a new task, it constructs a new debiased subnetwork, potentially reusing units from previous subnetworks, which improves adaptation and performance on the new task. During inference, BiasPruner employs a simple task-agnostic approach to select the best debiased subnetwork for prediction. We conduct experiments on three medical datasets for skin lesion and chest X-ray classification and demonstrate that BiasPruner consistently outperforms state-of-the-art CL methods in classification performance and fairness. Our code is publicly available.
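The abstract does not spell out how the per-unit bias score is computed or how the pruning threshold is set, so the sketch below only illustrates the general idea in PyTorch: score each unit of a layer by how strongly its activations track a spurious attribute, then mask out the highest-scoring units to obtain a debiased subnetwork for the current task. The correlation-based score, the `prune_ratio`, and the assumed access to a spurious-attribute indicator are hypothetical choices for illustration, not the authors' implementation.

```python
# Minimal sketch of a bias-score-driven pruning step (illustrative assumptions,
# not the BiasPruner code: the concrete bias score and pruning schedule are
# described in the paper, not in the abstract).

import torch
import torch.nn as nn


def unit_bias_scores(activations: torch.Tensor, bias_attr: torch.Tensor) -> torch.Tensor:
    """Proxy bias score per hidden unit.

    activations: (N, U) activations of one layer over a batch of N samples.
    bias_attr:   (N,)   indicator of a spurious attribute (assumed available here).
    Returns a (U,) vector; units whose activations correlate strongly with the
    spurious attribute receive high scores.
    """
    a = activations - activations.mean(dim=0, keepdim=True)
    b = (bias_attr.float() - bias_attr.float().mean()).unsqueeze(1)
    cov = (a * b).mean(dim=0)
    denom = a.std(dim=0) * b.std() + 1e-8
    return (cov / denom).abs()              # |correlation| per unit


def debiased_mask(scores: torch.Tensor, prune_ratio: float = 0.2) -> torch.Tensor:
    """Binary mask that drops the most biased units (1 = keep, 0 = prune)."""
    k = int(prune_ratio * scores.numel())
    mask = torch.ones_like(scores)
    if k > 0:
        _, worst = scores.topk(k)            # units with the highest bias scores
        mask[worst] = 0.0
    return mask


# Toy usage on one layer of a network.
layer = nn.Linear(16, 32)
x = torch.randn(64, 16)                      # a batch of input features
spurious = torch.randint(0, 2, (64,))        # hypothetical spurious-attribute labels
acts = torch.relu(layer(x))                  # (64, 32) unit activations

scores = unit_bias_scores(acts, spurious)
mask = debiased_mask(scores, prune_ratio=0.2)
debiased_acts = acts * mask                  # forward pass restricted to the debiased subnetwork
```

In a continual setting, one such binary mask per task could be stored, and a task-agnostic rule at inference (for example, choosing the subnetwork whose prediction is most confident) would correspond to the selection step mentioned in the abstract; the abstract does not state which rule BiasPruner actually uses.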



Acknowledgments

We thank NVIDIA for their hardware grant and the Natural Sciences and Engineering Research Council (NSERC) of Canada for the Vanier PhD Fellowship. A. Bissoto is funded by FAPESP (2019/19619-7, 2022/09606-8).

Author information

Corresponding author

Correspondence to Nourhan Bayasi.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1146 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bayasi, N., Fayyad, J., Bissoto, A., Hamarneh, G., Garbi, R. (2024). BiasPruner: Debiased Continual Learning for Medical Image Classification. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15010. Springer, Cham. https://doi.org/10.1007/978-3-031-72117-5_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72117-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72116-8

  • Online ISBN: 978-3-031-72117-5

  • eBook Packages: Computer Science, Computer Science (R0)
