
Interpretable Gender Classification from Retinal Fundus Images Using BagNets

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (MICCAI 2021)

Abstract

Deep neural networks (DNNs) can predict a person’s gender from retinal fundus images with high accuracy, even though ophthalmologists generally consider this task close to impossible. It has therefore been an open question which features allow reliable discrimination between male and female fundus images. To study this question, we used a particular DNN architecture called BagNet, which extracts local features from small image patches and then averages the class evidence across all patches. The BagNet performed on par with the more sophisticated Inception-v3 model, showing that gender information can be read out from local features alone. BagNets also naturally provide saliency maps, which we used to highlight the most informative patches in fundus images. We found that most evidence was provided by patches from the optic disc and the macula, with patches from the optic disc providing mostly male and patches from the macula providing mostly female evidence. Although further research is needed to clarify the exact nature of this evidence, our results suggest that there are localized structural differences between male and female fundus images. Overall, we believe that BagNets may provide a compelling alternative to standard DNN architectures in other medical image analysis tasks as well, as they do not require post-hoc explainability methods.
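
The bag-of-local-features idea described above can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors’ actual model: a convolutional stack with a deliberately small receptive field produces class logits at every spatial position, the image-level prediction is the average of these local logits, and the per-position logits serve directly as a class-evidence (saliency) map. The layer widths, receptive-field size, and two-class head are assumptions made purely for illustration.

```python
# Illustrative sketch of a bag-of-local-features classifier in the spirit of BagNet.
# Not the published architecture; sizes and the 2-class head are assumptions.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two small 3x3 convolutions followed by 1x1 convolutions keep the
        # receptive field small, so each output position "sees" only a local patch.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=1), nn.ReLU(),
        )
        # 1x1 classifier conv: one logit per class at every spatial position.
        self.local_logits = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        evidence = self.local_logits(self.features(x))  # (B, C, H', W') local class evidence
        logits = evidence.mean(dim=(2, 3))              # average evidence -> image-level logits
        return logits, evidence                         # evidence map doubles as a saliency map

model = TinyBagNet()
image = torch.randn(1, 3, 224, 224)                     # stand-in for a preprocessed fundus image
logits, evidence_map = model(image)
print(logits.shape, evidence_map.shape)                 # e.g. (1, 2) and (1, 2, 55, 55)
```

Because the final prediction is just an average of local logits, visualizing `evidence_map` for each class directly shows which image patches pushed the decision toward "male" or "female", with no post-hoc attribution method required.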



Acknowledgements

We thank Wieland Brendel for his support with BagNets. This research was supported by the German Ministry of Science and Education (BMBF, 01GQ1601 and 01IS18039A) and the German Science Foundation (BE5601/4-2 and EXC 2064, project number 390727645). Hanna Faber received research funding from the Junior Clinician Scientist Program of the Faculty of Medicine, Eberhard Karls University of Tübingen, Germany (application number 463–0–0). Additional funding was provided by Novartis AG through a research grant. The funding bodies had no influence on the planning and design of the study. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Indu Ilanchezian.

Author information


Correspondence to Murat Seçkin Ayhan.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ilanchezian, I., Kobak, D., Faber, H., Ziemssen, F., Berens, P., Ayhan, M.S. (2021). Interpretable Gender Classification from Retinal Fundus Images Using BagNets. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12903. Springer, Cham. https://doi.org/10.1007/978-3-030-87199-4_45


  • DOI: https://doi.org/10.1007/978-3-030-87199-4_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87198-7

  • Online ISBN: 978-3-030-87199-4
