Abstract
Geographic atrophy (GA), characterized mainly by atrophy of the retinal pigment epithelium (RPE), is an advanced form of age-related macular degeneration (AMD) that leads to vision loss. Automated and robust GA segmentation in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images remains an enormous challenge. This paper presents an automated and robust GA segmentation method based on an object tracking strategy for time-series SD-OCT volumetric images. Given the sheer volume of data, it is unrealistic for experts to segment the GA lesion region manually. In our proposed scenario, experts only need to manually delineate the GA lesion area at the first time point for each patient; the GA at the following time points is then detected automatically. To fully exploit the distinctive features of GA, a new sample construction method is proposed for more effectively extracting histogram of oriented gradients (HOG) features to train random forest models. Experiments on SD-OCT cubes from 10 eyes of 7 patients with GA demonstrate that our results correlate highly with the manual segmentations: the average correlation coefficient and overlap ratio for the GA projection area are 0.9881 and 82.62%, respectively.
This work was supported by the National Natural Science Foundation of China (61671242, 61701192, 61701222, 61473310), Suzhou Industrial Innovation Project (SS201759), and the Open Fund Project of Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University) (No. MJUKF201706).
1 Introduction
Geographic atrophy (GA) is an advanced stage of non-exudative age-related macular degeneration (AMD), a leading cause of progressive and irreversible vision loss among elderly individuals [1, 2]. GA, with loss of the retinal pigment epithelium (RPE) and choriocapillaris, presents as sharply demarcated atrophic lesions that affect the central visual field [3]. In most cases, GA lesions first appear in the perifoveal macula, initially sparing the fovea, and often expand and coalesce to include the fovea over time [4]. Characterization of GA regions, which helps clinicians objectively monitor AMD progression, is an essential step in the diagnosis of advanced AMD. This characterization depends directly on the precise segmentation and quantification of the areas affected by GA, as well as their properties [5]. Manual segmentation of GA is generally required, but it is time consuming and subject to inter-reader variability. Thus, automatic and robust segmentation of GA-affected retinal regions is fundamental and important in the diagnosis of advanced AMD.
To the best of our knowledge, very few methods [7, 8] have been described for the detection and segmentation of GA lesions in spectral-domain optical coherence tomography (SD-OCT) volumetric images. In earlier work [6], clinicians focused on qualitative evaluation of GA based on thickness measurements of the RPE, exploiting the retinal thinning and the loss of the RPE and photoreceptors in GA regions; however, such measurements are not accurate for detecting GA. To identify the GA lesion directly from the appearance of the RPE, early algorithms [8, 9] segmented GA regions mainly on a restricted projection image generated from the region between the RPE and the choroid. Chen et al. [7] utilized a geometric active contour model to automatically detect and segment the extent of GA in the projection images. A level set approach was employed to segment GA regions in both SD-OCT and fundus autofluorescence (FAF) images [8]; however, its results degrade severely when the initialization is even slightly inaccurate. Niu et al. [9] proposed an automated GA segmentation method for SD-OCT images using a Chan-Vese model with a local similarity factor to improve accuracy and stability. As noted above, these methods based on the restricted projection image have considerable limitations in the detection and segmentation of GA. Deep learning has achieved outstanding performance in many areas of computer vision and medical image processing. Ji et al. [10] constructed a voting system with deep VGG16 convolutional neural networks to automatically detect GA in SD-OCT images. However, deep neural networks require a large number of manually labeled training samples, and the voting system requires 10 trained models, which is time-consuming.
This paper presents an automated and robust GA segmentation method based on the histogram of oriented gradients (HOG) [11] feature in time-series SD-OCT volumetric images. Given the sheer volume of data, it is unrealistic for experts to segment the GA lesion region manually. In our proposed scheme, for each time series, experts only need to manually delineate the GA lesion area at the first time point of each patient; GA at the following time points is then detected automatically. Considering the characteristics of GA lesions in SD-OCT images, a new sample construction method is proposed for more effectively extracting HOG features to train random forest models. From the GA segmentation results in the OCT slices, quantitative evaluation of GA can then be performed on the OCT projection images.
2 Method
2.1 Method Overview
The whole framework of our method is shown in Fig. 1. For each patient, in the image pre-processing stage, noise removal and layer segmentation are performed on each B-scan. We then apply our new sample construction method, which facilitates the effective implementation of the subsequent steps. Next, we divide the sample patches into positive and negative samples and extract HOG features. Finally, a random forest is trained as the prediction model. In the testing phase, we apply the same processing to the testing data and obtain the GA segmentation result from the random forest model trained in the training phase.
2.2 Preprocessing
Denoising. During OCT imaging, a large amount of speckle noise arises from the random interference of scattered light. This noise floods the valid information in the image and greatly reduces the accuracy of subsequent algorithms. Given the noise distribution characteristics of SD-OCT images, we use a bilateral filter to reduce noise. Figure 2(b) shows the original image and Fig. 2(c) the denoised image after bilateral filtering.
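As a rough illustration of this denoising step, the sketch below implements a plain bilateral filter in NumPy. The window radius and the spatial/range sigmas are illustrative choices, not parameters reported in the paper.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving bilateral filter for a 2D grayscale image in [0, 1]."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='reflect')
    # Precompute the spatial (domain) Gaussian kernel.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range kernel: penalize intensity differences, so strong
            # edges (e.g. layer boundaries) are preserved while speckle
            # within homogeneous regions is averaged away.
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```

The range kernel is what distinguishes this from plain Gaussian smoothing: pixels on the other side of a sharp intensity step contribute almost nothing, which is why bilateral filtering suits speckle-laden OCT B-scans.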
Layer Segmentation. We focus only on the information between the internal limiting membrane (ILM) and the choroid; hence, layer segmentation is necessary. The gradual intensity distance method [12] is used to segment the Bruch's membrane (BM) layer and the ILM layer. As shown in Fig. 2(d), the blue line is the ILM and the yellow line is the BM.
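The gradual intensity distance method of [12] is more involved, but its core intuition can be sketched with a simple gradient-based stand-in: for each A-scan, the ILM is roughly the first strong dark-to-bright axial transition. The threshold below is a hypothetical value for illustration only.

```python
import numpy as np

def detect_ilm(bscan, threshold=0.2):
    """Simplified ILM detection: for each A-scan (column), return the first
    axial row where intensity rises sharply (vitreous -> retina transition).
    A stand-in sketch, NOT the gradual-intensity-distance method of [12]."""
    grad = np.diff(bscan, axis=0)             # axial gradient, shape (H-1, W)
    ilm = np.zeros(bscan.shape[1], dtype=int)
    for col in range(bscan.shape[1]):
        hits = np.flatnonzero(grad[:, col] > threshold)
        ilm[col] = hits[0] + 1 if hits.size else 0
    return ilm
```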
2.3 Samples Construction and Classification
Longitudinal data from 9 eyes of 7 patients, all presenting advanced non-neovascular AMD with extensive GA, were included in this study. For each data set, two independent readers drew manual outlines on the projection image in two separate sessions, and the ground-truth outline was obtained by keeping those areas outlined by two or more readers or sessions. In the preprocessing step, the ILM and BM layers are delineated in each B-scan, as shown in Fig. 3(a). In the GA region of a B-scan, bright pixels appear beneath the RPE because the RPE has atrophied [7]. The region of interest therefore needs to be restricted to the area between the ILM and a margin below the BM layer (100 pixels below the BM in this paper). As shown in Fig. 3(b), between the yellow line and the purple line, the GA regions show increased reflectivity compared with other regions. At the same time, there is a marked difference between the GA regions and the other regions above the BM layer. The average distance between the blue line and the purple line (Fig. 3(b)) is computed as a standard distance D, and the ILM layer is then shifted down by D pixels to form the lower boundary (Fig. 3(c)). Finally, we flatten the area between the top boundary and the lower boundary, which contains both GA and other retinal information, obtaining a new image (Fig. 3(d)).
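The flattening step can be sketched as a per-column shift that brings the segmented BM boundary onto a common horizontal line. The function below is a minimal illustration under that assumption; the choice of the deepest BM row as the reference is ours, not stated in the paper.

```python
import numpy as np

def flatten_to_bm(bscan, bm):
    """Flatten a B-scan so the BM boundary becomes a horizontal line.
    `bm` holds the BM row index for each column; every column is shifted
    down so that its BM pixel lands on a common reference row (here the
    deepest BM position), which straightens the retina for windowing."""
    H, W = bscan.shape
    ref = int(np.max(bm))                     # common reference row for BM
    flat = np.zeros_like(bscan)
    for col in range(W):
        shift = ref - int(bm[col])            # >= 0 by choice of ref
        flat[shift:, col] = bscan[:H - shift, col]
    return flat
```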
The new image of each B-scan is used to construct training and testing samples with a sliding-window method, and HOG features are then extracted from the samples. Experiments have shown that 64 \(\times \) 128 (width \(\times \) height) is the best image size for extracting HOG features [11]. Therefore, we resize all preprocessed images to 512 \(\times \) 128. As shown in Fig. 4(a), the sample size is 64 \(\times \) 128. For the lateral direction, the optimal step size is 8 pixels in this paper: too small a step size leads to high similarity between training samples, which reduces efficiency and increases time cost, while too large a step size harms segmentation accuracy. The red line in Fig. 4(b) indicates the GA area. If a training sample lies entirely within the red manual outline, it is marked as a positive sample (Area2 in Fig. 4(b)); if it lies entirely outside, it is marked as a negative sample (Area1 in Fig. 4(b)). When the sliding window covers both GA and non-GA columns (Area3 in Fig. 4(b)), it is marked as positive if the number of columns containing GA exceeds half the window width, and as negative otherwise. The number of training samples per B-scan is given by:
\(N = (W - w)/l + 1\)  (1)

where W is the width of the B-scan, w is the width of the sliding window, and l is the step size of the sliding window. Based on the above procedure and formula (1) with \(W=512\), \(w=64\), and \(l=8\), we obtain 57 samples per B-scan, and thus 7296 training samples for each SD-OCT volumetric image of 128 B-scans.
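The window count and the majority-column labeling rule above can be sketched directly; `ga_mask_cols` below is a hypothetical per-column boolean marking which lateral positions the readers outlined as GA.

```python
import numpy as np

def make_samples(flat_img, ga_mask_cols, w=64, l=8):
    """Slide a window of width w with step l across a flattened B-scan of
    width W, extracting N = (W - w)/l + 1 patches. A patch is labeled
    positive (1) when more than half of its columns lie inside the manually
    outlined GA region, negative (0) otherwise."""
    H, W = flat_img.shape
    n = (W - w) // l + 1                      # formula (1)
    patches, labels = [], []
    for k in range(n):
        x = k * l
        patches.append(flat_img[:, x:x + w])
        ga_cols = int(np.sum(ga_mask_cols[x:x + w]))
        labels.append(1 if ga_cols > w // 2 else 0)
    return np.array(patches), np.array(labels)
```

With \(W=512\), \(w=64\), \(l=8\) this yields the 57 windows per B-scan quoted in the text.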
2.4 HOG Feature Extraction and Random Forest Model Construction
HOG descriptors provide a dense overlapping description of image regions [11]. The main idea of HOG feature is that the appearance and shape of local objects can be well described by the directional density distribution of gradients or edges. The HOG feature is formed by calculating and counting the gradient direction histograms of the local area of the image.
In this paper, we extend the traditional HOG feature. We first normalize the input image with Gamma standardization (formula (2)) to adjust image contrast, reduce the influence of local shadows, and suppress noise interference:

\(I'(x,y) = I(x,y)^{Gamma}\)  (2)

The gradient magnitude is then computed by formula (3). For two-dimensional (2D) images, the gradient direction is computed in the x-y plane by formula (4):

\(G(x,y) = \sqrt{f_x(x,y)^2 + f_y(x,y)^2}\)  (3)

\(\theta(x,y) = \arctan\left(f_y(x,y)/f_x(x,y)\right)\)  (4)
where Gamma is set to 0.5 in formula (2), and \(f_x(x,y)\) and \(f_y(x,y)\) denote the image gradients along the x and y directions in formulas (3) and (4), respectively.
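Formulas (2)-(4) amount to a few NumPy operations; the sketch below uses centered finite differences for \(f_x\) and \(f_y\), which is one common choice rather than the paper's stated kernel.

```python
import numpy as np

def gamma_normalize(img, gamma=0.5):
    """Formula (2): gamma compression to temper contrast and local shadows."""
    return np.power(img, gamma)

def gradients(img):
    """Formulas (3) and (4): gradient magnitude and orientation from the
    finite-difference gradients f_y (axis 0) and f_x (axis 1)."""
    fy, fx = np.gradient(img)
    mag = np.sqrt(fx**2 + fy**2)
    ang = np.arctan2(fy, fx)                  # orientation in (-pi, pi]
    return mag, ang
```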
Finally, the gradient directions within each cell are quantized into 9 orientation bins in the x-y plane. Each pixel in the cell casts a weighted vote (mapped to a fixed angular range) into the histogram, with the gradient magnitude as the weight, yielding a 9-dimensional feature vector per cell. The feature vectors of all cells in a block are then concatenated to form the block's HOG feature, and the features of all overlapping blocks are concatenated into the final descriptor used for classification.
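A minimal Dalal-Triggs-style descriptor in this spirit is sketched below, assuming 8\(\times\)8-pixel cells and 2\(\times\)2-cell blocks (common defaults from [11], not stated explicitly in this paper). With hard bin assignment instead of the interpolated voting described above, a 128\(\times\)64 patch yields the classic 15\(\times\)7\(\times\)2\(\times\)2\(\times\)9 = 3780-dimensional vector.

```python
import numpy as np

def hog_descriptor(img, cell=8, block=2, bins=9):
    """Simplified HOG: 9-bin unsigned orientation histograms over 8x8 cells,
    L2-normalized over overlapping 2x2-cell blocks, then concatenated."""
    fy, fx = np.gradient(img.astype(float))
    mag = np.sqrt(fx**2 + fy**2)
    ang = np.rad2deg(np.arctan2(fy, fx)) % 180.0      # unsigned, [0, 180)
    H, W = img.shape
    cy, cx = H // cell, W // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # Hard-assign each pixel's magnitude to its orientation bin.
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    feats = []
    for i in range(cy - block + 1):
        for j in range(cx - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))  # block L2 norm
    return np.concatenate(feats)
```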
Random forest is an important ensemble learning method based on bagging, applicable to classification, regression, and other tasks. In this paper, the HOG features are used to train a random forest model that accomplishes the GA segmentation.
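Training such a classifier on the HOG vectors is a one-liner with scikit-learn; the sketch below is generic, and the tree count is an illustrative hyperparameter, not one reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_ga_classifier(features, labels, n_trees=100, seed=0):
    """Fit a bagged ensemble of decision trees on HOG feature vectors
    (one row per patch; label 1 = GA, 0 = normal retina)."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(features, labels)
    return clf
```

At test time, each sliding-window patch of a new B-scan is featurized the same way and passed to `clf.predict`, and the per-window decisions are mapped back to lateral GA extent.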
3 Experiments
Our algorithm was implemented in Matlab and run on a 4.0 GHz Pentium 4 PC with 16.0 GB of memory. We used SD-OCT volumetric datasets from 10 eyes of 7 patients with GA to quantitatively test our algorithm. The SD-OCT cubes are 512 (lateral) \(\times \) 1024 (axial) \(\times \) 128 (azimuthal) voxels, corresponding to a \(6\times 6 \times 2\) mm\(^3\) volume centered on the retinal macular region, generated with a Cirrus HD-OCT device. Several metrics [8] were used to assess GA area differences: the correlation coefficient (cc), the absolute area difference (AAD), and the overlap ratio (Overlap).
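The three metrics can be sketched as below. The overlap ratio is implemented here as intersection-over-union, which is the usual definition but is our assumption, since the paper only cites [8] for it; the per-pixel area factor is likewise a hypothetical parameter.

```python
import numpy as np

def overlap_ratio(seg, ref):
    """Overlap ratio between binary masks: |A ∩ B| / |A ∪ B| (assumed IoU)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return np.logical_and(seg, ref).sum() / np.logical_or(seg, ref).sum()

def absolute_area_difference(seg, ref, mm2_per_pixel=1.0):
    """Absolute difference of the two segmented areas; pass the per-pixel
    area to report the result in mm^2."""
    return abs(int(seg.sum()) - int(ref.sum())) * mm2_per_pixel

def correlation_coefficient(areas_a, areas_b):
    """Pearson correlation between two series of per-eye GA areas."""
    return np.corrcoef(areas_a, areas_b)[0, 1]
```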
The quantitative results of the inter-observer and intra-observer agreement evaluation for this data set are summarized in Table 1, where A\(_i\) (\(\mathrm{{i}} = 1, 2\)) represents the segmentations of the first grader in the i-th session, and B\(_i\) (\(\mathrm{{i}} = 1, 2\)) represents the segmentations of the second grader in the i-th session. Inter-observer differences were computed by considering the union of both sessions for each grader: A\( _{1 \& 2}\) and B\( _{1 \& 2}\) represent the first and second grader, respectively. The intra-observer and inter-observer comparisons showed very high correlation coefficients (cc), indicating very high linear correlation between different readers and for the same reader at different sessions. The overlap ratios (all > 90%) and the absolute GA area differences (all < 5%) indicate very high inter-observer and intra-observer agreement, showing that the measurement and quantification of GA regions in the generated projection images are effective and feasible [9].
3.1 Qualitative Analysis
Comparison with Average Gold Standard. Figure 5 shows the GA segmentation results in B-scans, where the green transparent areas represent the manual segmentation and the red lines are our automated segmentation results. The characteristics of GA make segmentation difficult; however, our proposed method handles many challenging cases, such as (1) non-uniform reflectivity within GA ((b)(d)(e)(h)(i)), (2) the influence of other retinal diseases ((b)(c)(e)(f)), and (3) discontinuity in GA size ((c)(j)). Because our segmentation is accurate and robust on B-scan images, we also obtain relatively high segmentation precision in the projection images. Figure 6 shows the GA projection images from six patients, where the red lines are the manual segmentation and the green lines are our segmentation. It can be seen from Fig. 6 that our automated GA segmentation is consistent with the manual segmentation.
Comparison with Traditional Methods. Figure 7 shows the comparison of GA segmentation results overlaid on projection images, with outlines generated by the average ground truth (red lines), Chen's method (blue lines), Niu's method (purple lines), and our method (green lines). In each subfigure, (a) and (c) show the segmentation results overlaid on the full projection images, and (b) and (d) show enlarged views of the regions marked by white boxes. As shown in Fig. 7(b) and (d), both Chen's and Niu's methods failed to detect parts of the boundaries between GA lesions because of low contrast. Comparatively, our method obtained higher consistency with the average ground truth.
Comparison with Deep Learning. Ji et al. [10] constructed a voting system with deep VGG16 convolutional neural networks to automatically detect GA in SD-OCT images, training ten deep network models on randomly selected training samples. Because the training samples are determined deterministically in our method, we utilize only one deep network to obtain GA segmentation results for comparison. Figure 8 shows the comparison of GA segmentation results in the projection image, with outlines generated by the average ground truth (red lines), the single deep network (yellow lines), and our method (green lines). In each subfigure, (a) and (c) show the segmentation results overlaid on the full projection images, and (b) and (d) show enlarged views of the regions marked by white boxes. As shown in Fig. 8(b) and (d), the single deep model misclassified normal regions as GA lesions. Our method not only obtained higher consistency with the average ground truth but also achieved higher efficiency.
3.2 Quantitative Evaluation
Comparison with Traditional Methods. We quantitatively compared our automated results with two traditional methods (Chen's and Niu's) and the manual segmentations drawn by 4 expert readers. Table 2 shows the agreement of the GA projection area in the axial direction between each segmentation result and the ground truth (individual reader segmentations and the average expert segmentations). From Table 2, comparing each segmentation method with the manual outlines drawn in FAF images, we observe that the correlation coefficient (0.9881 vs 0.970 and 0.979) and overlap ratio (82.64% vs 72.6% and 81.86%) of our method are the highest for the GA projection area. The absolute area difference of our method is the lowest (0.44 vs 1.44 and 0.81), indicating that the areas estimated by our method are closest to the manual delineations.
Comparison with Deep Learning. We quantitatively compared our automated results with the deep VGG16 convolutional neural network (one deep model) and the manual segmentations drawn by 4 expert readers. Table 3 shows the agreement of the GA projection area in the axial direction between each segmentation result and the ground truth (individual reader segmentations and the average expert segmentations). From Table 3, comparing the single-deep-model segmentation with the manual outlines drawn in FAF images, we observe that the correlation coefficient (0.9881 vs 0.900) and overlap ratio (82.64% vs 72.8%) of our method are higher for the GA projection area. The absolute area difference of our method is lower (0.44 vs 1.41), indicating that the areas estimated by our method are closer to the manual delineations. Moreover, deep learning methods are time-consuming because they require extensive training samples and labels to construct the model.
As shown in Fig. 9 and Tables 2 and 3, the variance of the overlap ratio of our method is smaller than that of the other methods, so our method is more robust.
4 Conclusions
In this paper, we presented an automated and robust GA segmentation method based on an object tracking strategy for time-series SD-OCT volumetric images. In our proposed scenario, experts only need to manually delineate the GA lesion area at the first time point for each patient; the GA at the following time points is then detected automatically, robustly, and accurately. To fully exploit the distinctive features of GA, a new sample construction method was proposed for more effectively extracting HOG features to train random forest models. Experiments on several SD-OCT volumetric images with GA demonstrate that our method shows good agreement with manual segmentations by different experts at different sessions. Comparative experiments against a semi-automated method, a region-based C-V model, and a deep VGG16 convolutional neural network show that our method achieves more accurate GA segmentation, better stability, and higher efficiency.
References
Klein, R., Klein, B.E., Knudtson, M.D., Meuer, S.M., Swift, M., Gangnon, R.E.: Fifteen-year cumulative incidence of age-related macular degeneration: the Beaver Dam Eye Study. Ophthalmology 114(2), 253–262 (2007)
Schatz, H., McDonald, H.R.: Atrophic macular degeneration. Rate of spread of geographic atrophy and visual loss. Ophthalmology 96(10), 1541–1551 (1989)
Bhutto, I., Lutty, G.: Understanding age-related macular degeneration (AMD): relationships between the photoreceptor/retinal pigment epithelium/Bruch’s membrane/choriocapillaris complex. Mol. Aspects Med. 33(4), 295–317 (2012)
Sunness, J.S., et al.: Enlargement of atrophy and visual acuity loss in the geographic atrophy form of age-related macular degeneration. Ophthalmology 106(9), 1768–1779 (1999)
Chaikitmongkol, V., Tadarati, M., Bressler, N.M.: Recent approaches to evaluating and monitoring geographic atrophy. Curr. Opin. Ophthalmol. 27, 217–223 (2016)
Folgar, F.A., Age Related Eye Disease Study 2 Ancillary Spectral-Domain Optical Coherence Tomography Study Group, et al.: Drusen volume and retinal pigment epithelium abnormal thinning volume predict 2-year progression of age-related macular degeneration. Ophthalmology 123(1), 39–50 (2016)
Chen, Q., de Sisternes, L., Leng, T., Zheng, L., Kutzscher, L., Rubin, D.L.: Semi-automatic geographic atrophy segmentation for SD-OCT images. Biomed. Opt. Express 4(12), 2729–2750 (2013)
Hu, Z., Medioni, G.G., Hernandez, M., Hariri, A., Wu, X., Sadda, S.R.: Segmentation of the geographic atrophy in spectral-domain optical coherence tomography and fundus autofluorescence images. Invest. Ophthalmol. Vis. Sci. 54(13), 8375–8383 (2013)
Niu, S., de Sisternes, L., Chen, Q., Leng, T., Rubin, D.L.: Automated geographic atrophy segmentation for SD-OCT images using region-based CV model via local similarity factor. Biomed. Opt. Express 7, 581–600 (2016)
Ji, Z., Chen, Q., Niu, S., Leng, T., Rubin, D.L.: Beyond retinal layers: a deep voting model for automated geographic atrophy segmentation in SD-OCT images. Transl. Vis. Sci. Technol. 7(1), 2063 (2018)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 886–893 (2005)
Chen, Q., Fan, W., Niu, S., Shi, J., Shen, H., Yuan, S.: Automated choroid segmentation based on gradual intensity distance in HD-OCT images. Opt. Express 23(7), 8974–8994 (2015)
Li, Y., Niu, S., Ji, Z., Chen, Q. (2018). Automated and Robust Geographic Atrophy Segmentation for Time Series SD-OCT Images. In: Lai, JH., et al. Pattern Recognition and Computer Vision. PRCV 2018. Lecture Notes in Computer Science(), vol 11256. Springer, Cham. https://doi.org/10.1007/978-3-030-03398-9_22