Abstract
In recent years, many neuroimaging studies have begun to integrate gradient-based explainability methods to provide insight into key features. However, existing explainability approaches typically generate a point estimate of importance and do not convey the degree of uncertainty associated with an explanation. In this study, we present a novel approach for estimating explanation uncertainty for convolutional neural networks (CNNs) trained on neuroimaging data. We train a CNN to classify individuals with schizophrenia (SZs) and healthy controls (HCs) using resting-state functional magnetic resonance imaging (rs-fMRI) dynamic functional network connectivity (dFNC) data. We apply Monte Carlo batch normalization (MCBN), generating an explanation after each iteration using layer-wise relevance propagation (LRP). We then test whether the resulting distribution of explanations differs between SZs and HCs and examine the relationship between MCBN-based LRP explanations and standard LRP explanations. We find a number of significant differences in LRP relevance between SZs and HCs, and we find that traditional LRP values frequently diverge from the MCBN relevance distribution. This study provides a novel approach for quantifying the uncertainty associated with gradient-based explanations in neuroimaging and represents a significant step towards increasing the reliability of explainable deep learning methods in a clinical setting.
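The general idea described above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the authors' pipeline: a two-layer ReLU network stands in for the CNN, the LRP-ε rule is implemented directly for dense layers, and the stochasticity of MCBN is approximated by resampling normalization statistics from random training minibatches at inference time. All data, weights, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 "training" samples with 10 dFNC-like features
X_train = rng.normal(size=(200, 10))
W1, b1 = 0.3 * rng.normal(size=(10, 16)), np.zeros(16)
W2, b2 = 0.3 * rng.normal(size=(16, 2)), np.zeros(2)

def forward(x_bn):
    a1 = np.maximum(x_bn @ W1 + b1, 0.0)  # ReLU hidden activations
    z2 = a1 @ W2 + b2                     # class logits (SZ vs. HC)
    return a1, z2

def lrp_dense(a, W, R, eps=1e-6):
    """LRP-epsilon rule: redistribute output relevance R to the layer input a."""
    z = a @ W
    s = R / (z + eps * np.sign(z))
    return a * (s @ W.T)

def lrp_explain(x_bn, target=0):
    a1, z2 = forward(x_bn)
    R2 = np.zeros_like(z2)
    R2[target] = z2[target]               # relevance starts at the target logit
    R1 = lrp_dense(a1, W2, R2)
    return lrp_dense(x_bn, W1, R1)        # per-feature input relevance

x = rng.normal(size=10)                   # one test sample

# Standard LRP: a single explanation using fixed (full-training-set) statistics
x_det = (x - X_train.mean(0)) / (X_train.std(0) + 1e-6)
point_rel = lrp_explain(x_det)

# MCBN-style loop: resample normalization statistics per iteration,
# yielding a distribution of LRP explanations instead of a point estimate
mc_rels = []
for _ in range(100):
    batch = X_train[rng.choice(len(X_train), 32, replace=False)]
    x_bn = (x - batch.mean(0)) / (batch.std(0) + 1e-6)
    mc_rels.append(lrp_explain(x_bn))
mc_rels = np.stack(mc_rels)

# Flag features where the standard LRP value falls outside the
# central 95% of the MCBN relevance distribution
lo, hi = np.percentile(mc_rels, [2.5, 97.5], axis=0)
outside = (point_rel < lo) | (point_rel > hi)
```

The `outside` mask illustrates the kind of comparison the abstract describes: checking where a traditional point-estimate explanation diverges from the Monte Carlo relevance distribution.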
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
cae67{at}gatech.edu
robyn.l.miller{at}gmail.com
vcalhoun{at}gsu.edu
Funding is provided by NIH R01MH118695 and NSF 2112455.
We found a minor error in the analysis and have corrected that in the updated version. It does not fundamentally affect the findings or contributions of the paper.