
Mask Distillation Network for Conjunctival Hyperemia Severity Classification

  • Research Article
  • Machine Intelligence Research

Abstract

To achieve automatic, fast, and accurate severity classification of bulbar conjunctival hyperemia, we propose a novel prior knowledge-based framework called the mask distillation network (MDN). The proposed MDN consists of a segmentation network and a classification network with teacher-student branches. The segmentation network generates a bulbar conjunctival mask, and the classification network grades the severity of bulbar conjunctival hyperemia into four levels. In the classification network, the original image is fed into the student branch and the masked image into the teacher branch, while an attention consistency loss and a classification consistency loss keep the two branches learning in a similar way. This design of “different input but same output”, named mask distillation (MD), introduces the regional prior knowledge that bulbar conjunctival hyperemia severity depends only on the bulbar conjunctiva region. Extensive experiments on 5,117 anterior segment images demonstrate the effectiveness of mask distillation: 1) The accuracy of the MDN student branch is 3.5% higher than that of the best single baseline network and 2% higher than that of the baseline network combination. 2) In the test phase, only the student branch is needed and no additional segmentation network is required; the framework takes only 0.003 s to classify a single image, the fastest among all compared methods. 3) Compared with a single baseline network, the attention maps of both the teacher and student branches of the MDN are visibly improved.
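To make the “different input but same output” scheme concrete, the sketch below shows one possible MDN training step in PyTorch. It is an illustration of the mask distillation idea rather than the authors' implementation: the branch interface (each branch returning class logits and an attention map), the use of an MSE term for attention consistency and a KL-divergence term for classification consistency, and the loss weights `w_att` and `w_cls` are assumptions made for this example.

```python
# Illustrative sketch of one mask distillation (MD) training step.
# Assumption: each branch is a callable returning (logits, attention_map);
# the exact consistency losses and weights are placeholders, not the paper's.
import torch
import torch.nn.functional as F


def md_training_step(student, teacher, image, conj_mask, label,
                     w_att=1.0, w_cls=1.0):
    """Compute the combined MD loss for one batch.

    image:     (B, 3, H, W) anterior segment images
    conj_mask: (B, 1, H, W) bulbar conjunctival masks from the segmentation net
    label:     (B,) severity grades in {0, 1, 2, 3}
    """
    masked_image = image * conj_mask              # teacher sees only the conjunctiva

    s_logits, s_att = student(image)              # student branch: original image
    t_logits, t_att = teacher(masked_image)       # teacher branch: masked image

    # Supervised four-grade classification loss for both branches.
    loss_ce = F.cross_entropy(s_logits, label) + F.cross_entropy(t_logits, label)

    # Attention consistency: the student should attend where the teacher does.
    loss_att = F.mse_loss(s_att, t_att)

    # Classification consistency: the two branches should agree on the prediction.
    loss_cls = F.kl_div(F.log_softmax(s_logits, dim=1),
                        F.softmax(t_logits, dim=1),
                        reduction="batchmean")

    return loss_ce + w_att * loss_att + w_cls * loss_cls
```

At inference time only `student(image)` is evaluated, which is why the deployed classifier needs neither the teacher branch nor the segmentation network.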

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Nos. 62172223 and 61671242) and the Fundamental Research Funds for the Central Universities (No. 30921013105).

Author information

Corresponding author

Correspondence to Qiang Chen.

Ethics declarations

The authors declare that they have no conflict of interest regarding this work.

Additional information

Colored figures are available in the online version at https://link.springer.com/journal/11633.

Mingchao Li received the B.Sc. degree in engineering mechanics from the School of Computer Science and Engineering, Nanjing University of Science and Technology, China in 2016. He is currently a Ph.D. candidate in control science and engineering at the School of Computer Science and Engineering, Nanjing University of Science and Technology, China.

His research interests include medical image processing and deep learning.

Kun Huang received the B.Sc. degree in computer science and technology from the School of Computer Science and Technology, Zhejiang University of Technology, China in 2019. He is a Ph.D. candidate in control science and engineering at the School of Computer Science and Engineering, Nanjing University of Science and Technology, China. His research interests include image generation and medical image processing.

Xiao Ma received the M.Sc. degree in automation from the School of Computer Science and Engineering, Nanjing University of Science and Technology, China in 2021. He is a Ph.D. candidate in computer science and technology at the School of Computer Science and Engineering, Nanjing University of Science and Technology, China.

His research interests include weakly supervised learning and medical image processing.

Yuexuan Wang received the B.Sc. degree in electronic information science and technology from the School of Physics and Information Technology, Shaanxi Normal University, China in 2017. She is a Ph.D. candidate in pattern recognition and intelligent systems at the School of Computer Science and Engineering, Nanjing University of Science and Technology, China.

Her research interests include medical image processing and deep learning.

Wen Fan received the M.D. degree in clinical medicine from Wuhan University, China in 2012. She is currently an associate chief physician and associate professor with the First Affiliated Hospital of Nanjing Medical University, China.

Her research interests include retinal imaging and vitreoretinal diseases.

Qiang Chen received the B.Sc. degree in communication engineering and the Ph.D. degree in pattern recognition and intelligent systems from Nanjing University of Science and Technology, China in 2002 and 2007, respectively. He held a post-doctoral position with Stanford University, USA from 2010 to 2011. He is currently a professor with Nanjing University of Science and Technology, China.

His research interests include image processing and analysis, and machine learning.

About this article

Cite this article

Li, M., Huang, K., Ma, X. et al. Mask Distillation Network for Conjunctival Hyperemia Severity Classification. Mach. Intell. Res. 20, 909–922 (2023). https://doi.org/10.1007/s11633-022-1385-5
