
Explainable AI for Glaucoma Prediction Analysis to Understand Risk Factors in Treatment Planning


Abstract:

Glaucoma causes irreversible blindness. In 2020, about 80 million people worldwide had glaucoma. Existing machine learning (ML) models are limited to glaucoma prediction and leave clinicians, patients, and medical experts unaware of how the data are analyzed and decisions are made. Explainable artificial intelligence (XAI) and interpretable ML (IML) create opportunities to increase user confidence in the decision-making process. This article proposes XAI and IML models for analyzing glaucoma predictions/results. The XAI model primarily uses an adaptive neuro-fuzzy inference system (ANFIS) and pixel density analysis (PDA) to provide trustworthy explanations for glaucoma predictions from infected and healthy images. The IML model uses submodular pick local interpretable model-agnostic explanations (SP-LIME) to explain results coherently; SP-LIME interprets the results of a spiking neural network (SNN). Using two publicly available datasets, namely fundus (optical coherence tomography) images of the eyes and clinical medical records of glaucoma patients, our experimental results show that the XAI and IML models provide convincing and coherent decisions for clinicians/medical experts and patients.
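
To make the SP-LIME step concrete, the sketch below shows how submodular pick LIME can summarize a black-box classifier trained on tabular patient records, using the open-source lime package. It is a minimal illustration under stated assumptions, not the paper's implementation: a scikit-learn random forest stands in for the spiking neural network, and the feature names and synthetic data are hypothetical placeholders rather than the paper's glaucoma datasets.

# Hedged sketch: SP-LIME explanations for a tabular glaucoma-record classifier.
# Assumptions: a RandomForestClassifier substitutes for the paper's SNN, and
# the features/data below are synthetic placeholders, not the paper's datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
from lime import submodular_pick

rng = np.random.default_rng(0)
# Hypothetical clinical features, chosen only for illustration.
feature_names = ["age", "intraocular_pressure", "cup_disc_ratio", "visual_field_index"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + X[:, 2] > 0).astype(int)  # toy label rule for the synthetic data

# Black-box model standing in for the SNN.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["healthy", "glaucoma"],
    discretize_continuous=True,
)

# Submodular pick selects a small, diverse set of instance-level explanations
# intended to summarize the model's global behavior.
sp = submodular_pick.SubmodularPick(
    explainer,
    X,
    model.predict_proba,
    method="sample",
    sample_size=100,
    num_features=4,
    num_exps_desired=3,
)
for exp in sp.sp_explanations:
    print(exp.as_list(label=exp.available_labels()[0]))

In the paper's pipeline, model.predict_proba would be replaced by the trained SNN's prediction function, and the picked explanations would be the ones presented to clinicians and patients.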
Article Sequence Number: 2509209
Date of Publication: 02 May 2022
