Determining the optimal level of smoothing in cortical thickness analysis: A hierarchical approach based on sequential statistical thresholding
Introduction
Thickness is a descriptor of the mammalian neocortex that provides relevant information on the integrity of cortical columns and on morphological correlates of higher cognitive functions (Von Economo, 1929, Jones & Peters, 1984). Computational neuroanatomy techniques are contributing substantially to in vivo measurement of human cortical thickness, facilitating a topographical and quantitative description of the atrophy patterns associated with prevalent neurological (Butman & Floeter, 2007, Charil et al., 2007, Biega et al., 2006, Singh et al., 2006, Lerch et al., 2005, Thompson et al., 2001) and psychiatric disorders (Makris et al., 2007, Shaw et al., 2006, Lyoo et al., 2006, Kuperberg et al., 2003). These techniques have also been successfully applied to detect cortical descriptors that differentiate healthy aging (Sowell et al., 2003) from incipient neurodegenerative processes (Apostolova and Thompson, 2008). In particular, AD patients show a rate and topographical pattern of cortical thinning that differ from those of healthy elderly subjects, which is relevant for the diagnosis and early detection of this prevalent neurodegenerative disease (Dickerson et al., 2009, Singh et al., 2006, Lerch et al., 2005, Thompson et al., 2003).
Cortical thickness maps are obtained by computational techniques involving surfaces, voxels, or a mixture of both. Our study focuses on surface-based methods, which measure the distance between surface-based models of the gray matter/white matter and gray matter/CSF boundaries (Lerch & Evans, 2005, Fischl & Dale, 2000). Although the spatial resolution of these measures has increased drastically in recent years, the spatial blurring required by statistical testing decreases the anatomical precision of cortical thinning estimates (Han et al., 2006, Lerch & Evans, 2005). Smoothing is typically performed by linearly filtering the cortical thickness maps with surface-based Gaussian kernel approximations after the maps are resampled to an average surface. These filters therefore act as low-pass spatial frequency filters along the average cortical manifold (Hagler et al., 2006, Chung et al., 2005). As a general rule, the larger the extent of smoothing, the lower the spatial resolution of thickness measurements and the accuracy at identifying cortical thinning. Note, however, that an appropriate level of smoothing can significantly increase the sensitivity of subsequent statistical analyses by increasing the signal-to-noise ratio in the statistical map.
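The iterative nearest-neighbor averaging scheme mentioned above (cf. Hagler et al., 2006) can be sketched as follows; this is a minimal illustrative implementation, assuming a generic mesh given as per-vertex adjacency lists rather than the actual FreeSurfer surface machinery:

```python
import numpy as np

def nn_average_smooth(thickness, neighbors, n_iter):
    """Approximate surface-based Gaussian kernel smoothing by repeated
    nearest-neighbor averaging over a triangulated surface mesh.

    thickness : (V,) per-vertex cortical thickness values
    neighbors : list of V index lists; neighbors[i] = vertices adjacent to i
    n_iter    : number of averaging iterations; more iterations approximate
                a wider Gaussian kernel (i.e., heavier smoothing)
    """
    t = np.asarray(thickness, dtype=float).copy()
    for _ in range(n_iter):
        # each vertex is replaced by the mean of itself and its neighbors
        t = np.array([(t[i] + t[nb].sum()) / (1 + len(nb))
                      for i, nb in enumerate(neighbors)])
    return t
```

Because each iteration acts as a local low-pass filter, increasing `n_iter` trades spatial resolution for noise reduction, exactly the trade-off discussed in the text.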
Cortical surfaces are typically mapped onto the sphere to establish an intrinsic 2D spherical coordinate system. To take advantage of this natural surface parameterization, previous studies have approximated Gaussian kernel filtering with heat kernel smoothing over the sphere (Chung et al., 2007). We previously found that non-linear spherical wavelet-based denoising schemes improve the trade-off between anatomical precision and thinning detection achieved with surface-based approximations to Gaussian kernel smoothing (Bernal-Rusiel et al., 2008). The lower smoothing introduced by wavelet-based spatial filters, together with their adaptive properties, likely accounts for their better performance compared with Gaussian smoothing operators. In the present study, we smooth individual cortical thickness maps over the spherical average surface using both the previously developed spherical wavelet-based denoising and the approximation to Gaussian kernel smoothing given by the iterative nearest-neighbor averaging algorithm (Hagler et al., 2006, Han et al., 2006).
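The adaptive character of wavelet shrinkage, unlike a linear Gaussian filter, suppresses small (noise-like) coefficients while preserving large (signal-bearing) ones. A minimal 1D Haar analogue illustrates the idea; note this is only a didactic stand-in, not the spherical wavelet transform used in the study:

```python
import numpy as np

def haar_soft_denoise(signal, threshold):
    """1D analogue of non-linear wavelet shrinkage (single-level Haar
    transform + soft thresholding of the detail coefficients).
    Requires an even-length input."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (smooth) part
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (noise-carrying) part
    # soft threshold: shrink small detail coefficients toward zero while
    # keeping large ones -> smoothing adapts to the local signal content
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse Haar transform
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With `threshold = 0` the signal is reconstructed exactly; as the threshold grows, only small local fluctuations are smoothed away, which is why such filters blur true boundaries less than a Gaussian kernel of comparable noise suppression.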
According to matched filter theory (Pratt, 1991), the optimal extent of spatial smoothing is determined by matching the filter to the putative area of change. This criterion, however, is difficult to optimize in exploratory studies. Results from real (Han et al., 2006, Singh et al., 2006) and simulation studies (Bernal-Rusiel et al., 2008, Lerch & Evans, 2005) suggest that increasing the smoothness of thickness maps not only enhances sensitivity but also reduces specificity and image resolution. Here, sensitivity is defined as the probability of correctly identifying a vertex showing true thinning, and specificity as the probability of correctly rejecting a vertex that did not change. Improving the trade-off between sensitivity and specificity over the range of detection might therefore serve as a preliminary approach to determining the optimal smoothing to apply in cortical thickness analysis. However, computing sensitivity and specificity requires knowing the numbers of true positive and true negative vertices detected, which can only be determined in simulation studies. Alternatively, this trade-off could be estimated from the proportion of false positives among all detected vertices and the number of true null hypotheses. Unfortunately, neither parametric random field methods (Worsley et al., 1996, Friston et al., 1994) nor nonparametric permutation-based tests (Hayasaka and Nichols, 2003) can control the error at the vertex level when the omnibus null hypothesis is false and the cortical thickness maps have been smoothed. The same constraint applies to vertex-wise FDR procedures. In fact, smoothing typically extends the signal present in one particular vertex to many null vertices, artificially inflating the proportion of detected vertices that contain true signal (Chumbley and Friston, 2009).
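In a simulation study, where the ground-truth thinning mask is known, the vertex-wise sensitivity and specificity defined above reduce to simple counts over the detection mask; a sketch:

```python
import numpy as np

def sensitivity_specificity(detected, true_thinning):
    """Vertex-wise sensitivity and specificity for a simulated thinning mask.

    detected      : boolean (V,) array, vertices declared significant
    true_thinning : boolean (V,) array, vertices where thinning was simulated
    """
    detected = np.asarray(detected, dtype=bool)
    true_thinning = np.asarray(true_thinning, dtype=bool)
    tp = np.sum(detected & true_thinning)     # hits
    fn = np.sum(~detected & true_thinning)    # misses
    tn = np.sum(~detected & ~true_thinning)   # correct rejections
    fp = np.sum(detected & ~true_thinning)    # false positives
    sens = tp / (tp + fn)   # P(detect | true thinning)
    spec = tn / (tn + fp)   # P(reject | no change)
    return sens, spec
```

As the text notes, these quantities are only computable when the true mask is known, which is precisely why simulations are needed to calibrate the smoothing level.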
Here we propose a simple hierarchical model to overcome this drawback when applying either linear (e.g., Gaussian kernel approximation) or non-linear (e.g., wavelet-based) smoothing to cortical thickness maps. More specifically, our approach first controls for false positives at the level of clusters, via either random field theory or permutation-based inference, and then at the level of vertices, by applying an adaptive FDR procedure within each significant cluster detected in the previous step. We confirmed the superior performance of the proposed methodology (for both Gaussian and spherical wavelet smoothing) over other statistical thresholding approaches by means of simulation studies, and further validated the method in a cross-sectional study comparing moderate AD patients with healthy elderly subjects.
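The two-level logic can be sketched as follows. This is an illustrative skeleton only: the cluster-level step (random field theory or permutation inference) is assumed to have already produced the set of significant clusters, and plain Benjamini-Hochberg stands in for the adaptive FDR procedure used in the paper:

```python
import numpy as np

def bh_fdr(pvals, q):
    """Benjamini-Hochberg step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # compare sorted p-values against the BH line q*k/m
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def hierarchical_threshold(pvals, cluster_labels, significant_clusters, q=0.05):
    """Vertex-level FDR control applied only inside clusters that already
    survived cluster-level inference.

    pvals                : (V,) vertex-wise p-values
    cluster_labels       : (V,) integer cluster id per vertex (0 = background)
    significant_clusters : iterable of cluster ids passing cluster-level control
    """
    pvals = np.asarray(pvals, dtype=float)
    labels = np.asarray(cluster_labels)
    reject = np.zeros(pvals.size, dtype=bool)
    for c in significant_clusters:
        idx = np.flatnonzero(labels == c)
        if idx.size:
            reject[idx] = bh_fdr(pvals[idx], q)
    return reject
```

Restricting the vertex-level test to significant clusters shrinks the family of hypotheses, which is what allows meaningful vertex-wise control even after smoothing has spread signal into neighboring null vertices.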
Section snippets
Subjects
The simulation study was performed on 66 healthy elderly subjects (age: 59–94 yr, 50 women) selected from the OASIS database. Inclusion criteria were a Mini-Mental State Examination (MMSE) score ≥ 29 (high level of functioning) and a Clinical Dementia Rating (CDR) of 0 (no dementia). Subjects were randomly assigned to the control group (no changes in cortical thickness were introduced) or the experimental group (hybrid changes in cortical thickness were included). Gender (25 females and 8 males in
Assessing performance of different statistical approaches on simulated cortical thinning
Before adding synthetic changes to the cortical maps, thickness was compared between groups to ensure that no bias was accidentally introduced. As expected, no group differences in thickness were found for any thresholding method after applying either Gaussian or wavelet-based smoothing.
Performance of all statistical approaches at detecting group differences in simulated thinning in the whole brain varied with the level of smoothing. Fig. 3 illustrates how the trade-off between sensitivity and
Discussion
The main objective of the present study was to determine the optimal level of smoothing that provides the best trade-off between sensitivity and specificity at detecting significant variations in cortical thickness maps. To achieve this goal we propose a sequential hierarchical methodology combining cluster- and vertex-based thresholding methods. The performance of hierarchical thresholding (HT) was compared with other widely used statistical inference procedures in both simulated and real
Acknowledgments
The authors want to thank the three anonymous reviewers of this paper for their helpful comments and insightful suggestions. We also wish to thank Dr. Randy Buckner (Harvard University), the Neuroinformatics Research Group (Washington University School of Medicine), and the Biomedical Informatics Research Network for making available the OASIS MRI dataset used in the present study. This research was supported by research grants from the Spanish Ministry of Science and Innovation (SAF2008-03300)
References (61)
- Apostolova and Thompson. Mapping progressive brain structural changes in early Alzheimer's disease and mild cognitive impairment. Neuropsychologia (2008)
- et al. In vivo mapping of gray matter loss with voxel-based morphometry in mild Alzheimer's disease. Neuroimage (2001)
- Bernal-Rusiel et al. Detection of focal changes in human cortical thickness: spherical wavelets versus Gaussian smoothing. Neuroimage (2008)
- Charil et al. Focal cortical atrophy in multiple sclerosis: relation to lesion load and disability. Neuroimage (2007)
- Chumbley and Friston. False discovery rate revisited: FDR and topological inference using random fields. Neuroimage (2009)
- et al. Deformation-based surface morphometry applied to gray matter deformation. Neuroimage (2003)
- Chung et al. Cortical thickness analysis in autism with heat kernel smoothing. Neuroimage (2005)
- et al. Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage (1999)
- et al. Computing Fourier transforms and convolutions on the 2-sphere. Adv. Appl. Math. (1994)
- et al. Cortical surface-based analysis: II. Inflation, flattening, and a surface-based coordinate system. Neuroimage (1999)