
Pterygium-Net: a deep learning approach to pterygium detection and localization

Multimedia Tools and Applications

Abstract

Automatic pterygium detection is an essential screening tool for community health service groups. It allows non-experts to perform the screening process without large and expensive equipment, which is especially valuable in rural areas. Patients who screen positive for pterygium can then be referred to certified medical personnel for further diagnosis and treatment. Current state-of-the-art algorithms for pterygium detection rely on basic machine learning approaches such as artificial neural networks and support vector machines, which have not yet achieved the high detection sensitivity and specificity required in standard medical practice. Hence, a deep learning approach based on fully convolutional neural networks is proposed to detect and localize pterygium-infected tissues automatically. The input image requirement of the developed system is low: any commercial mobile phone camera is sufficient. Moreover, the developed algorithm, which we refer to as Pterygium-Net, works well even if the eye image is captured under low lighting conditions with the pupil away from the center. Pterygium-Net utilizes three convolutional neural network (CNN) layers and three fully connected layers. Two steps are implemented to overcome the lack of training data: generating synthetic images and pre-training the CNN weights and biases on a different public dataset. For pterygium localization, an additional box-proposal step based on edge information generates candidate regions of pterygium-infected tissue. A Hanning window is then applied to each generated region to give more weight to its center area.
Experimental results show that Pterygium-Net produces high average detection sensitivity and specificity of 0.95 and 0.983, respectively. For pterygium tissue localization, the algorithm achieves 0.811 accuracy with a very low failure rate of 0.053. In the future, deeper networks can be implemented to further improve pterygium localization.
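The Hanning-weighted scoring of candidate regions described above can be sketched as follows. The abstract does not give the authors' exact formulation, so this is a minimal illustrative sketch in pure Python: the 2-D window is assumed to be the separable (outer-product) form of the 1-D Hann window, and the names `hann_2d`, `edge_map`, and `weighted_region_score` are hypothetical, not taken from the paper.

```python
import math

def hann_1d(n):
    """1-D Hann (Hanning) window of length n: 0 at the ends, 1 at the centre."""
    if n == 1:
        return [1.0]
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1))) for i in range(n)]

def hann_2d(rows, cols):
    """Separable 2-D Hann window, assumed here to be the outer product
    of two 1-D Hann windows (a common construction)."""
    wr, wc = hann_1d(rows), hann_1d(cols)
    return [[wr[r] * wc[c] for c in range(cols)] for r in range(rows)]

def weighted_region_score(edge_map, box):
    """Score one proposal box by its edge responses, weighted so that
    evidence near the box centre counts more than evidence at its rim."""
    r0, c0, r1, c1 = box              # top-left / bottom-right (exclusive)
    h, w = r1 - r0, c1 - c0
    win = hann_2d(h, w)
    total = sum(win[r][c] * edge_map[r0 + r][c0 + c]
                for r in range(h) for c in range(w))
    norm = sum(win[r][c] for r in range(h) for c in range(w))
    return total / norm if norm else 0.0
```

With this weighting, a proposal whose edge evidence sits at its centre outscores one where the same evidence sits at its border, which matches the stated goal of giving "more weight to the center area" of each generated region.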



Author information

Corresponding author

Correspondence to Mohd Asyraf Zulkifley.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors would like to acknowledge funding from Universiti Kebangsaan Malaysia (Dana Impak Perdana: DIP-2015-006 and Geran Universiti Penyelidikan: GUP-2015-053). The Titan V GPU used for this research was donated by NVIDIA Corporation (KK-2019-005).

About this article

Cite this article

Zulkifley, M.A., Abdani, S.R. & Zulkifley, N.H. Pterygium-Net: a deep learning approach to pterygium detection and localization. Multimed Tools Appl 78, 34563–34584 (2019). https://doi.org/10.1007/s11042-019-08130-x

