
Edge-Prior Contrastive Transformer for Optic Cup and Optic Disc Segmentation

  • Conference paper

Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14429)


Abstract

Optic cup and optic disc segmentation plays a vital role in retinal image analysis, with significant implications for automated diagnosis. In fundus images, owing to the variability of intra-class features and the complexity of inter-class features, existing methods often fail to explicitly model the correlation and discriminability of target edge features. To overcome this limitation, our method captures the interdependency and consistency of edge pixels by exploiting the differences among them. To this end, we propose an Edge-Prior Contrastive Transformer (EPCT) architecture that strengthens attention to indistinct edge information. Our method combines pixel-to-pixel and pixel-to-region contrastive learning to obtain higher-level semantic information and global contextual feature representations. Furthermore, we inject edge-prior information into the Transformer model to capture prior knowledge of the location and structure of target edges. In addition, we propose an anchor sampling strategy tailored to edge regions to obtain effective edge features. Experimental results on three publicly available datasets demonstrate the effectiveness of the proposed method, which achieves excellent segmentation performance.
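The paper does not include code, but the two ideas named in the abstract — pixel-level contrastive learning and anchor sampling restricted to edge regions — can be sketched roughly as below. This is a minimal illustrative PyTorch sketch under our own assumptions (the function names, the morphological edge-band heuristic, and the InfoNCE-style loss are generic stand-ins, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def edge_band(mask, width=3):
    """Approximate the band of pixels around label edges: dilate and erode
    the binary mask via max-pooling; their difference is the edge band."""
    m = mask.float().unsqueeze(1)                       # (B, 1, H, W)
    pad = width // 2
    dilated = F.max_pool2d(m, width, stride=1, padding=pad)
    eroded = 1.0 - F.max_pool2d(1.0 - m, width, stride=1, padding=pad)
    return (dilated - eroded).squeeze(1) > 0            # (B, H, W) bool

def pixel_contrastive_loss(embed, labels, band, n_anchors=256, tau=0.1):
    """InfoNCE-style pixel-to-pixel loss with anchors sampled only from the
    edge band. embed: (B, C, H, W) features; labels: (B, H, W) class ids."""
    B, C, H, W = embed.shape
    feat = F.normalize(embed, dim=1).permute(0, 2, 3, 1).reshape(-1, C)
    lab = labels.reshape(-1)
    idx = band.reshape(-1).nonzero(as_tuple=True)[0]    # edge-band pixels
    if idx.numel() > n_anchors:                         # anchor sampling
        idx = idx[torch.randperm(idx.numel())[:n_anchors]]
    a = feat[idx]                                       # (N, C) anchors
    sim = a @ a.t() / tau                               # (N, N) similarities
    same = lab[idx].unsqueeze(0) == lab[idx].unsqueeze(1)
    eye = torch.eye(len(idx), dtype=torch.bool)
    pos = same & ~eye                                   # same-class, non-self pairs
    logits = sim.masked_fill(eye, float('-inf'))        # exclude self-similarity
    logp = F.log_softmax(logits, dim=1)
    denom = pos.sum().clamp(min=1)
    return -(logp[pos].sum() / denom)                   # average over positives
```

A pixel-to-region variant would replace one side of the similarity with class-wise mean embeddings; the edge-band restriction is what keeps the sampled anchors concentrated on the ambiguous cup/disc boundaries the paper targets.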

Y. Feng and S. Zhou—Equal contribution.



Acknowledgment

This work was supported in part by the National Science Foundation of China under Grants 62076142 and 62241603, and in part by the National Key Research and Development Program of Ningxia under Grants 2023AAC05009, 2022BEG03158, and 2021BEB0406.

Author information

Correspondence to Zhendong Li.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Feng, Y., Zhou, S., Wang, Y., Li, Z., Liu, H. (2024). Edge-Prior Contrastive Transformer for Optic Cup and Optic Disc Segmentation. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14429. Springer, Singapore. https://doi.org/10.1007/978-981-99-8469-5_35


  • DOI: https://doi.org/10.1007/978-981-99-8469-5_35

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8468-8

  • Online ISBN: 978-981-99-8469-5

  • eBook Packages: Computer Science, Computer Science (R0)
