Classification of Multiple Retinal Disorders from Enhanced Fundus Images Using Semi-supervised GAN

  • Original Research
  • Published in SN Computer Science

Abstract

Automatic detection of retinal disorders is gaining considerable attention with the emergence of deep learning. Ophthalmologists primarily use color fundus photographs to examine the human retina and diagnose abnormalities. With the surge in the number of visual impairments, an AI-enabled retina screening system can expedite the retinal examination process. Existing works in this direction focus primarily on either segmentation or classification, and most are implemented on preprocessed, good-quality fundus images. In practice, however, the quality of color fundus images is degraded by illumination inhomogeneity and low contrast. There is therefore a need for an end-to-end fundus image analysis system. Steering in this direction, the proposed work analyzes the performance of semi-supervised Generative Adversarial Networks (GANs) for classifying retinal fundus images into multiple categories. In addition, the nonlocal retinex framework is applied to enhance the quality of the fundus images without over-smoothing the edges. A large dataset of raw fundus images, acquired from multiple eye hospitals and released in the public domain, is used to implement the proposed work. The results are compared with a transfer learning method, and an average accuracy of 87% is obtained, suggesting that semi-supervised GANs can potentially be used to classify heterogeneous retinal disorders.
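
To make the classification setup concrete, the sketch below illustrates the standard semi-supervised GAN formulation the abstract refers to: the discriminator doubles as a classifier with K real disease classes plus one extra "fake" class, so unlabeled fundus images can still contribute to training. This is an illustrative PyTorch sketch only; the class count, image resolution, network sizes, and loss formulation are placeholder assumptions and not the authors' implementation.

```python
# Minimal semi-supervised GAN (SGAN) sketch for K-class fundus classification.
# Illustrative only: architecture, sizes, and losses are assumptions,
# not the network described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 4          # assumed number of retinal disorder classes
Z_DIM = 100    # assumed latent dimension for the generator
IMG = 64       # assumed (downsampled) fundus image size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(Z_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # 3-channel 64x64 output
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), Z_DIM, 1, 1))

class Discriminator(nn.Module):
    """Discriminator doubles as a classifier: it outputs K+1 logits,
    where the extra class marks generated ('fake') images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(256, K + 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

def training_step(D, G, x_lab, y_lab, x_unlab, opt_d, opt_g, device):
    """One SGAN step: labeled images get a supervised K-class loss,
    unlabeled real images are pushed away from the 'fake' class,
    and generated images are pushed toward it."""
    fake_class = K
    bs = x_lab.size(0)

    # --- discriminator update ---
    z = torch.randn(bs, Z_DIM, device=device)
    x_fake = G(z).detach()
    logits_lab = D(x_lab)
    logits_unl = D(x_unlab)
    logits_fake = D(x_fake)

    loss_sup = F.cross_entropy(logits_lab, y_lab)
    # real (unlabeled) images: any of the first K classes, i.e. NOT fake
    p_real_unl = 1.0 - F.softmax(logits_unl, dim=1)[:, fake_class]
    loss_unl = -torch.log(p_real_unl + 1e-8).mean()
    loss_fake = F.cross_entropy(
        logits_fake, torch.full((bs,), fake_class, device=device, dtype=torch.long))
    d_loss = loss_sup + loss_unl + loss_fake
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: fool D into calling fakes "real" ---
    z = torch.randn(bs, Z_DIM, device=device)
    logits_gen = D(G(z))
    p_real_gen = 1.0 - F.softmax(logits_gen, dim=1)[:, fake_class]
    g_loss = -torch.log(p_real_gen + 1e-8).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In the proposed pipeline, the fundus images would first be enhanced with the nonlocal retinex framework before being fed to the classifier; that enhancement stage is omitted from this sketch for brevity.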


Acknowledgements

Ms. Smitha A. expresses her gratitude to the Ministry of Education, Government of India, for providing financial support (as a fellowship) for carrying out this research at the National Institute of Technology Karnataka, Surathkal.

Funding

Dr. P. Jidesh wishes to thank the Department of Atomic Energy, Govt. of India, for providing financial support under the research grant no. 02011/17/2020NBHM(RP)/R&DII/8073.

Author information

Corresponding author

Correspondence to P. Jidesh.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection ‘Progresses in Image Processing’ guest edited by P. Nagabhushan, Peter Peer, Partha Pratim Roy and Satish Kumar Singh.

About this article

Cite this article

Smitha, A., Jidesh, P. Classification of Multiple Retinal Disorders from Enhanced Fundus Images Using Semi-supervised GAN. SN COMPUT. SCI. 3, 59 (2022). https://doi.org/10.1007/s42979-021-00945-6

