Lesion-aware knowledge distillation for diabetic retinopathy lesion segmentation

Abstract

Retinal fundus images are widely used for screening Diabetic Retinopathy (DR), and the lesion information they contain is indispensable for DR diagnosis. Extracting this information relies on sophisticated lesion segmentation methods, yet existing methods require substantial computation and storage, making them difficult to deploy in real-world clinical scenarios. Knowledge distillation (KD) has become an essential tool for reducing a network's computational complexity. However, because lesion regions occupy only a small, inconspicuous portion of a fundus image, directly applying current KD methods cannot transfer sufficient lesion knowledge, which restricts what the student network can learn. In essence, the challenge is how to focus the KD process on lesion regions and transfer more comprehensive pathological knowledge to the student network. Considering the importance of lesion regions in fundus images and the global semantic relations among lesion regions across different fundus images, we propose a Lesion-aware Knowledge Distillation (LKD) framework that focuses on transferring lesion knowledge. Its key contribution is a lesion embedding queue built from the global training samples, which conveys global pathology knowledge from the teacher network to the student network and thus promotes the acquisition of lesion-related knowledge. Furthermore, we propose a self-paced hard sample learning strategy for distilling the lesion embedding queue, which further improves the efficiency of knowledge transfer. We evaluate LKD on the IDRiD and DDR benchmark datasets, where the proposed method improves on the previous best results by 2.1% AUPR and 2.2% DICE, and by 1.5% AUPR and 2.2% DICE, respectively. Particular improvements of 2.3% and 3.2% in DICE are achieved on the IDRiD dataset for the tiny lesions, i.e., MA and HE, respectively. Our code is available at https://github.com/YaqiWangCV/LKD.
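
To make the two core ideas of the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation, of a lesion embedding queue fed by teacher features and a queue-relation distillation loss with a self-paced hard-sample weight. All names and values here (LesionQueue, lkd_loss, lambda_threshold, queue size, temperature) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a lesion embedding queue plus queue-relation
# distillation with a self-paced weight; names and hyperparameters are
# assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


class LesionQueue:
    """FIFO memory of L2-normalised lesion embeddings gathered over training."""

    def __init__(self, dim: int, size: int = 4096):
        self.queue = F.normalize(torch.randn(size, dim), dim=1)  # random warm start
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, emb: torch.Tensor):
        emb = F.normalize(emb, dim=1)
        idx = (self.ptr + torch.arange(emb.shape[0])) % self.queue.shape[0]
        self.queue[idx] = emb
        self.ptr = int((self.ptr + emb.shape[0]) % self.queue.shape[0])


def masked_lesion_embedding(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average-pool feature maps (B, C, H, W) over a lesion mask (B, 1, h, w)."""
    mask = F.interpolate(mask.float(), size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1.0)


def lkd_loss(student_feat, teacher_feat, lesion_mask, queue: LesionQueue,
             temperature: float = 0.1, lambda_threshold: float = 1.0):
    """Distill the teacher's similarity distribution over the global queue into
    the student, weighting each sample by a self-paced easiness criterion."""
    z_s = F.normalize(masked_lesion_embedding(student_feat, lesion_mask), dim=1)
    z_t = F.normalize(masked_lesion_embedding(teacher_feat, lesion_mask), dim=1)

    # Relations of each sample's lesion embedding to embeddings from the
    # whole training stream stored in the queue.
    logits_s = z_s @ queue.queue.t() / temperature
    logits_t = z_t @ queue.queue.t() / temperature

    per_sample = F.kl_div(F.log_softmax(logits_s, dim=1),
                          F.softmax(logits_t, dim=1),
                          reduction="none").sum(dim=1)

    # Self-paced weighting: samples whose loss is below the current threshold
    # contribute fully; harder samples are held back until the threshold grows.
    weights = (per_sample.detach() < lambda_threshold).float()
    loss = (weights * per_sample).sum() / weights.sum().clamp(min=1.0)

    queue.enqueue(z_t.detach())  # refresh the queue with teacher embeddings
    return loss
```

In this toy form, lambda_threshold would be scheduled upward during training so that hard samples are admitted gradually; the actual queue construction, scheduling, and loss weighting used in LKD should be taken from the released code.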

Data Availability and Access

Data related to the current study are available from the corresponding author on reasonable request.

Notes

  1. https://idrid.grand-challenge.org

  2. https://github.com/nkicsl/DDR-dataset

Acknowledgements

This research was supported by the National Natural Science Foundation of China (No.62076059) and the Science Project of Liaoning Province (2021-MS-105).

Author information

Contributions

Yaqi Wang proposed the method, conducted the experiments, analyzed the data, and wrote the manuscript. Qingshan Hou and Peng Cao supervised the project and participated in manuscript revisions. Jinzhu Yang and Osmar R. Zaiane provided critical reviews that helped improve the manuscript.

Corresponding author

Correspondence to Peng Cao.

Ethics declarations

Ethical and Informed Consent for Data Used

This article does not contain any studies with human participants or animals performed by any of the authors.

Competing Interests

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, Y., Hou, Q., Cao, P. et al. Lesion-aware knowledge distillation for diabetic retinopathy lesion segmentation. Appl Intell 54, 1937–1956 (2024). https://doi.org/10.1007/s10489-024-05274-8
