
Continual Learning of New Diseases with Dual Distillation and Ensemble Strategy

Conference paper in Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (volume 12261)

Abstract

Most intelligent diagnosis systems are developed for one or a few specific diseases, whereas medical specialists can diagnose all diseases of a certain organ or tissue. Since it is often difficult to collect data of all diseases, it would be desirable if an intelligent system could initially diagnose a few diseases and then continually learn to diagnose more and more diseases as data of these new classes become available. However, current intelligent systems suffer from catastrophic forgetting of old knowledge when learning new classes. In this paper, we propose a new continual learning framework that alleviates this issue by simultaneously distilling both old knowledge and recently learned new knowledge, and by ensembling the class-specific knowledge from the previous classifier and the newly learned classifier. Experiments showed that the proposed method outperforms state-of-the-art methods on multiple medical and natural image datasets.
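To make the two ideas in the abstract concrete, the following PyTorch sketch illustrates (i) a dual-distillation training loss that combines cross-entropy on the current data with distillation from an old-class teacher and from a recently trained new-class teacher, and (ii) a class-specific ensemble of the previous and updated classifiers at test time. This is not the authors' implementation: the helper names (distill, dual_distillation_loss, ensemble_predict), the temperature T, the loss weights lambda_old/lambda_new, the blend weight alpha, and the use of a separate new-class expert as the second teacher are all illustrative assumptions.

```python
# Minimal sketch of dual distillation + class-specific ensembling
# (assumed hyperparameters and helper names; not the paper's code).
import torch
import torch.nn.functional as F

def distill(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss between two sets of logits."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def dual_distillation_loss(new_model, old_model, new_expert, x, y,
                           n_old, lambda_old=1.0, lambda_new=1.0):
    """x, y: a mini-batch of images and labels.
    new_model outputs logits over all (old + new) classes;
    old_model covers the first n_old classes, new_expert the remaining new classes."""
    logits = new_model(x)
    ce = F.cross_entropy(logits, y)                 # supervised loss on current data
    with torch.no_grad():
        old_teacher_logits = old_model(x)           # teacher for old classes
        new_teacher_logits = new_expert(x)          # teacher for recently learned classes
    loss_old = distill(logits[:, :n_old], old_teacher_logits)
    loss_new = distill(logits[:, n_old:], new_teacher_logits)
    return ce + lambda_old * loss_old + lambda_new * loss_new

@torch.no_grad()
def ensemble_predict(new_model, old_model, x, n_old, alpha=0.5):
    """Blend the previous classifier's old-class scores with the updated
    model's scores before taking the argmax (class-specific ensemble)."""
    logits = new_model(x).clone()
    logits[:, :n_old] = alpha * old_model(x) + (1 - alpha) * logits[:, :n_old]
    return logits.argmax(dim=1)
```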

Z. Li and C. Zhong contributed equally to this paper.



Acknowledgement

This work is supported in part by the National Key Research and Development Program (grant No. 2018YFC1315402), the Guangdong Key Research and Development Program (grant No. 2019B020228001), the National Natural Science Foundation of China (grant No. U1811461), and the Guangzhou Science and Technology Program (grant No. 201904010260).

Author information


Corresponding author

Correspondence to Ruixuan Wang.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Z., Zhong, C., Wang, R., Zheng, W.-S. (2020). Continual Learning of New Diseases with Dual Distillation and Ensemble Strategy. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_17


  • DOI: https://doi.org/10.1007/978-3-030-59710-8_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59709-2

  • Online ISBN: 978-3-030-59710-8

  • eBook Packages: Computer Science, Computer Science (R0)
