Abstract
Automatic extraction of blood vessels is an important step in computer-aided diagnosis in ophthalmology. Blood vessels have different widths, orientations, and structures; therefore, extracting a proper feature vector is a critical step, especially in classifier-based vessel segmentation methods. In this paper, a new multi-scale rotation-invariant local binary pattern operator is employed to extract an efficient feature vector for the different types of vessels in retinal images. To estimate the vesselness value of each pixel, the obtained multi-scale feature vector is applied to an adaptive neuro-fuzzy inference system. Then, by applying a proper top-hat transform, thresholding, and length filtering, the thick and thin vessels are highlighted separately. The performance of the proposed method is measured on the publicly available DRIVE and STARE databases. The average accuracy of 0.942, along with a true positive rate (TPR) of 0.752 and a false positive rate (FPR) of 0.041, is very close to the manual segmentation rates obtained by the second observer. The proposed method is also compared with several state-of-the-art methods and shows a higher average TPR in the same range of FPR and accuracy.
1 Introduction
The detection and quantitative measurement of variations in the retinal blood vessels can be used for the diagnosis of several diseases such as diabetic retinopathy, hypertension, occlusion, glaucoma, and obesity. For example, vessel occlusion makes vessels longer, hypertension narrows the arteries, while diabetes creates new blood vessels. Therefore, several blood vessel detection methods can be found in the literature for the diagnosis of such diseases [1–7]. Also, the retinal blood vessel distribution is unique for each person and therefore can be used for personal identification [8].
Developments in acquisition equipment enable us to capture high-resolution images of the retina. Manual or semiautomatic blood vessel extraction techniques are labor intensive and time consuming, especially for large databases of retinal images; thus, the development of automatic methods for robust blood vessel extraction is valuable. In the literature, several techniques have been reported for blood vessel segmentation. These methods can generally be categorized into three classes: (1) kernel-based, (2) tracking-based, and (3) classifier-based.
In the kernel-based methods, the retinal images are filtered by various vessel-like kernels, and the blood vessel structures are detected by maximizing the responses of the applied kernels. Mathematical morphology operators [6, 9] and matched filters [1, 2, 10, 11] are two examples of this category. In the matched filters, a series of Gaussian-shaped filters, such as the simple Gaussian model [1, 2], the dual-Gaussian model [10], or the derivative of the Gaussian function [11], is used to detect the blood vessels. However, the matched filters have strong responses not only to blood vessels but also to non-vessel edges such as bright blobs. They also have to use several kernels to detect vessels with different thicknesses and orientations.
In the tracking-based methods, the vessel is treated as a line-like structure, and the algorithms try to follow vessel edges by exploiting local information. In these methods, various vessel profile models such as the Gaussian profile [12], generic parametric model [13], Bayesian probabilistic model [14], and multi-scale profile [15] are used to find the path that best matches the vessel profile model. Although these methods have high performance in detecting blood vessels, they usually have two weak spots: the limitation in handling bifurcations, especially in thin vessels, and the need for manually selected seed points.
The classifier-based methods are divided into two subclasses: supervised and unsupervised. In the supervised methods [16–18], prior information from labeled vessels is exploited to decide whether a pixel belongs to a vessel or not. For this purpose, different classifiers such as the artificial neural network [16], the Gaussian mixture model classifier [17], and the KNN classifier [18] have been used. In the unsupervised methods, vessel segmentation is done without any prior labeling knowledge [19, 20]. In the classifier-based methods, the performance of vessel detection heavily depends on the features extracted from the retinal images. Various feature extraction methods such as the Gabor wavelet transform [17], ridge detection [18], matched filters [19], and trench detection [20] have been reported in the literature.
Other techniques have tried to combine these methods to improve performance [21–23]. Mendonca et al. [21] used morphological operators and a region growing algorithm, while Palomera-Perez et al. [22] and Martinez-Perez et al. [23] employed Hessian-based vesselness and region growing techniques to extract blood vessels.
In this paper, an efficient and easy-to-implement classifier-based method is presented for automatically extracting blood vessels. An adaptive neuro-fuzzy inference system (ANFIS) is used as classifier, and a proper extension of local binary pattern (LBP) operator is employed to extract multi-scale statistical and structural features of blood vessels. The combination of ANFIS and LBP is used to calculate the vesselness measure of each pixel in retinal images. A proper and simple procedure is applied in the postprocessing phase to extract the thin and thick vessels separately. By applying length filter on the thin and thick vessels and integrating them, the retinal blood vessel network is detected.
The rest of this paper is organized as follows: brief reviews of the adaptive neuro-fuzzy inference system and local binary patterns are presented in Sects. 2 and 3, respectively. The proposed method for robust blood vessel detection is presented in Sect. 4. Experimental results are reported in Sect. 5, and finally, the conclusion is given in Sect. 6.
2 Adaptive neuro-fuzzy inference system
Fuzzy logic, proposed by Zadeh [24], can be used not only as a control methodology but also as a data processing tool. Unlike binary logic, which is based on the crisp values 0 (“false”) and 1 (“true”), fuzzy logic uses a degree of truth expressed by membership functions, rules, and fuzzy logic operators. Membership functions make it possible to determine the weight of each input in defining the final output. The final output is obtained using fuzzy “if–then” rules, which combine the various dependencies between input variables using fuzzy logic operators to describe the final output.
The most critical issue in fuzzy systems is appropriately determining their parameters, such as the shape and location of the membership functions and the composition of the fuzzy rules. In addition to the trial-and-error method, one can use learning methods such as artificial neural networks to obtain optimal fuzzy logic parameters from training data. The adaptive neuro-fuzzy inference system (ANFIS) was obtained by combining a neural network and a fuzzy inference system [25]. In ANFIS, either backpropagation or a combination of least-squares estimation and backpropagation may be used to estimate the membership function parameters. Although in a general fuzzy inference system both the premise (if) part and the consequence (then) part of the fuzzy if–then rules can be fuzzy propositions, in ANFIS the consequence part is a zero- or first-order polynomial. Such models are called Sugeno-type fuzzy models [26]. For a first-order Sugeno fuzzy model, a common rule set with two fuzzy if–then rules is as follows:

Rule 1: if \( x \) is \( A_{1} \) and \( y \) is \( B_{1} \), then \( f_{1} = p_{1} x + q_{1} y + r_{1} \);

Rule 2: if \( x \) is \( A_{2} \) and \( y \) is \( B_{2} \), then \( f_{2} = p_{2} x + q_{2} y + r_{2} \).
The corresponding equivalent ANFIS structure and its reasoning mechanism are shown in Fig. 1. This network has two kinds of nodes: fixed nodes and adaptive nodes. The adaptive nodes, depicted by rectangles, contain parameters that may be trained using a learning algorithm, while the fixed nodes, depicted by circles, are constant and do not contain any parameters.
In Fig. 1, the first layer consists of adaptive neurons that determine the membership degree of each input to each fuzzy set.
The second layer consists of fixed nodes that simply perform a multiplication operation:

\( O_{2,i} = w_{i} = \mu_{A_{i}}(x)\,\mu_{B_{i}}(y), \quad i = 1, 2 \)

where \( O_{2,i} \) is the output of the ith node in layer 2 and \( \mu_{S} \) is an appropriate parameterized membership function for the fuzzy set S. \( \mu_{A_{i}}(x) \) and \( \mu_{B_{i}}(y) \) are the outputs of the first-layer nodes, specifying the membership value of each input x or y to its corresponding fuzzy set (\( A_{i} \) for x and \( B_{i} \) for y).
In the third layer, fixed nodes are used to normalize the outputs of the previous layer's nodes as below:

\( O_{3,i} = \bar{w}_{i} = \frac{w_{i}}{w_{1} + w_{2}}, \quad i = 1, 2 \)
The fourth layer consists of adaptive neurons that compute a weighted first-order polynomial function as below:

\( O_{4,i} = \bar{w}_{i} f_{i} = \bar{w}_{i} (p_{i} x + q_{i} y + r_{i}), \quad i = 1, 2 \)

where \( p_{i} \), \( q_{i} \), and \( r_{i} \) are parameters obtained during the training process.
The last layer includes a single fixed neuron that sums the outputs of the nodes in the previous layer:

\( O_{5,1} = \sum\nolimits_{i} \bar{w}_{i} f_{i} \)
In this paper, a hybrid learning method is used to estimate the parameters of the adaptive neurons. In this type of learning, the parameters of the neurons in the first layer are first set to fixed random values. Then, the parameters of the neurons in the fourth layer are trained with the least-squares error method. In the next step, these trained parameters are kept constant, and the neurons in the first layer are trained with the error backpropagation gradient descent algorithm. These steps are iterated until the stopping condition is satisfied.
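As a concrete illustration, the five layers described above can be sketched as the forward pass of a two-input, two-rule, first-order Sugeno ANFIS. This is a minimal sketch under illustrative assumptions: the Gaussian membership functions and all parameter values are placeholders, not the trained network used in this paper.

```python
import numpy as np

def gaussmf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def anfis_forward(x, y, premise, consequent):
    """Forward pass of a two-rule first-order Sugeno ANFIS.

    premise:    [(cA, sA, cB, sB), ...]  membership parameters per rule
    consequent: [(p, q, r), ...]         first-order polynomial per rule
    """
    # Layer 1: membership degree of each input to its fuzzy sets
    mu = [(gaussmf(x, cA, sA), gaussmf(y, cB, sB))
          for (cA, sA, cB, sB) in premise]
    # Layer 2: firing strength of each rule (product of memberships)
    w = np.array([ma * mb for (ma, mb) in mu])
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layer 4: weighted first-order polynomials f_i = p*x + q*y + r
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    # Layer 5: single summing node
    return float(np.dot(w_bar, f))
```

With identical premise parameters for both rules, the normalized weights become 0.5 each and the output is the average of the two rule polynomials, which makes the layer-by-layer flow easy to verify by hand.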
3 Local binary pattern operator
Local binary pattern (LBP), proposed by Ojala et al. [27, 28], is a very effective multi-resolution statistical and structural texture primitive descriptor that has been applied in many applications such as face recognition [29], fingerprint classification [30], and remote sensing analysis [31]. In the LBP operator, the primitive patterns are extracted by comparing the values of P equally spaced neighborhood points (\( g_{i} \), \( i = 0, \ldots, P-1 \)) on a circle of radius R with the value of the central pixel (\( g_{c} \)). The primitive patterns are represented with binary codes (\( \mathrm{BC}_{P,R} \)):

\( \mathrm{BC}_{P,R} = [s(g_{0} - g_{c}), s(g_{1} - g_{c}), \ldots, s(g_{P-1} - g_{c})] \), where \( s(x) = 1 \) if \( x \ge 0 \) and \( s(x) = 0 \) otherwise.
If the position of a neighborhood point (\( g_{i} \)) does not fall on the center of a pixel, we round it to the center of the nearest pixel; one could instead obtain the value of \( g_{i} \) by interpolating the corresponding pixels. The reason for using the rounding process is to speed up the calculation of the proposed LBP. In the classical LBP (LBPriu2) [28], only uniform patterns are selected as local texture features. The uniform patterns contain at most two bitwise transitions from 0 to 1 or vice versa in the obtained binary code (T(BC P,R )) when it is considered as a circular structure:

\( {\text{LBP}}_{P,R}^{\text{riu2}} = \begin{cases} \sum\nolimits_{i=0}^{P-1} s(g_{i} - g_{c}) & \text{if } T(\mathrm{BC}_{P,R}) \le 2 \\ P + 1 & \text{otherwise} \end{cases} \)

where

\( T(\mathrm{BC}_{P,R}) = \sum\nolimits_{i=0}^{P-1} \left| s(g_{i} - g_{c}) - s(g_{(i+1) \bmod P} - g_{c}) \right| \)
The uniformity measure T corresponds to the number of transitions from 0 to 1 or from 1 to 0 between successive bits in circular representation of the obtained binary code (BC P,R ). The superscript “riu2” refers to the use of rotation-invariant uniform patterns that have a T value of at most two. The classical LBP is rotation invariant, because it assigns a unique label to each pattern based on the number of its “1” bits, and the placement of “1” bits does not have any effect on the LBP outputs. An example of calculating the LBP value is shown in Fig. 2.
By applying this operator, only uniform patterns such as flat areas, spots, corners, line-ends, and edges, which are shown in Fig. 3, can be extracted; all non-uniform patterns are neglected by merging them into one pattern with label P + 1.
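A pixel-level sketch of the classical operator may help. The function below computes the circular binary code with nearest-pixel rounding of the neighbor coordinates, the transition count T, and the rotation-invariant riu2 label; image-boundary handling is omitted for brevity.

```python
import numpy as np

def lbp_riu2(img, row, col, P=8, R=1):
    """Classical rotation-invariant uniform LBP at a single pixel.

    Returns (label, T): the riu2 label and the number of bitwise
    transitions in the circular binary code.
    """
    gc = img[row, col]
    bits = []
    for i in range(P):
        angle = 2.0 * np.pi * i / P
        # neighbor coordinates rounded to the nearest pixel center
        r = int(round(row - R * np.sin(angle)))
        c = int(round(col + R * np.cos(angle)))
        bits.append(1 if img[r, c] >= gc else 0)
    # T: 0/1 transitions between successive bits, circularly
    T = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    if T <= 2:
        return sum(bits), T   # uniform: label = number of "1" bits
    return P + 1, T           # all non-uniform patterns share label P+1
```

On a flat patch, all neighbors match the center, so the label is P with T = 0; on a one-pixel-wide bright line through the center, the code has T = 4 and the pattern collapses into the single non-uniform label P + 1, which is exactly the weakness the extension below addresses.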
Since the blood vessel structure in retinal images is a line pattern with a T value greater than 2, the classical LBPriu2 cannot describe it efficiently. Therefore, we use an extension of LBP (LBPNE), proposed by the authors [32], which can describe line patterns efficiently. The formulation of this version of LBP is given below:
Since patterns with line-shaped structures have four bitwise transitions in their binary code (T(BC P,R ) = 4), as shown in Fig. 4, in this version of LBP, the line patterns are treated separately. Also, instead of assigning a single label to all of the other non-uniform patterns, we use one label for each group of them that has the same bitwise transition (T) value.
By employing different values for P and R, we can extract multi-resolution patterns, as shown in Fig. 4. The value of R (R > 0) refers to the radius of the circle on which the P (P > 1) equally spaced neighbor points are considered to extract the LBP values. Although any value of P can be used, the best value for P is equal to the number of pixels that lie on the perimeter of the corresponding circle, so that all vessel points are utilized for extracting the LBP values. By detecting multi-resolution patterns in the retinal images, efficient feature vectors for blood vessels with different diameters can be extracted easily.
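The multi-scale feature extraction can be sketched as follows. The exact LBPNE labeling of [32] is not reproduced here; as a simplification, each scale contributes the rotation-invariant bit count together with the transition value T, giving a six-element vector at the three (P, R) pairs used later in Sect. 4.1. Boundary pixels would need padding, which is omitted.

```python
import numpy as np

# (P, R) pairs used in the paper for images of about 700 x 600 pixels
SCALES = [(18, 3), (32, 5), (48, 9)]

def circular_code(img, row, col, P, R):
    """Rotation-invariant value (number of '1' bits) and transition
    count T of the circular binary code at one pixel and one scale."""
    gc = img[row, col]
    bits = [1 if img[int(round(row - R * np.sin(2 * np.pi * i / P))),
                    int(round(col + R * np.cos(2 * np.pi * i / P)))] >= gc
            else 0
            for i in range(P)]
    T = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return sum(bits), T

def multiscale_feature(img, row, col, scales=SCALES):
    """Per-pixel multi-scale feature vector: (value, T) at each scale."""
    feat = []
    for P, R in scales:
        v, t = circular_code(img, row, col, P, R)
        feat.extend([v, t])
    return feat
```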
4 The proposed blood vessel detection method
In this paper, a robust method for automatic blood vessel extraction is introduced. In the first step, a new and efficient rotation-invariant LBP operator (\( {\text{LBP}}_{P,R}^{\text{NE}} \)) with different values of P and R is applied to extract a multi-scale feature vector for all pixels in the retinal images. Next, the obtained feature vectors are applied to the trained ANFIS to estimate the vesselness value of each pixel. Then, the thin and thick vessels are extracted separately by applying simple and proper postprocessing procedures. Finally, the blood vessel network is obtained by applying a simple logical OR operation to the detected thin and thick vessels. The details of these steps are given in the rest of this section, and the flowchart of the proposed system is depicted in Fig. 5.
4.1 LBP feature extraction
When the color images of retinal vessels and their red, green, and blue channels are visualized separately, as shown in Fig. 6, the green channel shows the best vessel/background contrast. Therefore, this channel is selected to be processed by the LBPNE operators. Since the width of blood vessels in retinal images of about 700 × 600 pixels is usually in the range of 2–10 pixels, the LBPNE operators at three scales \( {\text{LBP}}_{18,3}^{\text{NE}} \), \( {\text{LBP}}_{32,5}^{\text{NE}} \), and \( {\text{LBP}}_{48,9}^{\text{NE}} \) are applied to each pixel to cover all vessel widths. For other data sets in which the maximum vessel width is greater than 10 pixels, these parameters should be set based on the maximum vessel width so as to span all vessel widths. Another choice is to employ a resizing algorithm to resize the images to about 700 × 605 pixels. The obtained values of these LBP operators and their corresponding bitwise transition (T) values are used as the multi-scale feature vector:
This feature vector, which can reflect the characteristics of different vessels, is extracted for all pixels in the retinal images and then applied to the trained ANFIS (see Sect. 4.2) to estimate their vesselness values.
4.2 Vesselness degree measurement using ANFIS
To estimate the vesselness value of each pixel, an adaptive neuro-fuzzy inference system (ANFIS) is employed. The architecture of the used ANFIS is shown in Fig. 7. The training data set is extracted directly from real retinal images. To this end, we selected five images from the training set of the DRIVE data set [34] and randomly selected 100,000 vessel and non-vessel points from the selected training images. For each point, a feature vector as explained in Eq. 11 was extracted. The output of the ANFIS is set in the range of −1 to 1: the value 1 for vessel and −1 for background. The ANFIS is trained using a combination of the least-squares method and the backpropagation gradient descent method to emulate the training data set. In this type of learning, the parameters of the input membership functions (IMF neurons) are first set to fixed random values. Then, the parameters of the output membership functions (OMF neurons) are trained with the least-squares error method. In the next step, these trained parameters are kept constant, and the neurons in the IMF layer are trained with the error backpropagation gradient descent algorithm. These steps are iterated until the stopping condition is satisfied.
In the test phase, the LBP feature vectors are extracted for all pixels in the input retinal images and applied to the trained ANFIS to indicate their vesselness degree. To reduce the effect of noise, a simple uniform averaging filter with a 5 × 5 kernel is applied to the obtained vesselness values. The result of this step, shown in Fig. 6e, is used to enhance and detect the thin and thick vessels separately.
4.3 Thin vessel enhancement
To extract the thin vessels, the morphological top-hat operator with suitable circular structuring elements is employed. Circular structuring elements of radii 2 and 4 are applied to the obtained vesselness values to highlight thin vessels within a specific range of widths. The final thin vessels are extracted by applying a global threshold, computed with Otsu's method [33], which selects the threshold value that minimizes the intraclass variance in the output binary images. Since several small non-vessel regions may be extracted, a proper length filter is also applied to eliminate them. The result of this phase is shown in Fig. 6f.
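The Otsu thresholding step can be sketched in isolation. Below is a minimal histogram-based implementation, assuming the vesselness values have already been top-hat filtered; the 256-bin quantization is an illustrative choice. Maximizing the between-class variance, as done here, is equivalent to minimizing the intraclass variance mentioned above.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's global threshold: pick the bin boundary that maximizes
    the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_sigma = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()     # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if sigma_b > best_sigma:
            best_sigma, best_t = sigma_b, edges[k]
    return best_t
```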
4.4 Thick vessel enhancement
The thick vessels are extracted by applying a proper thresholding process followed by simple length filtering. Since the ratio of blood vessel pixels in retinal images is less than 15 %, the threshold value (TV) is chosen to be greater than 85 % of the existing vesselness values. For this purpose, we use the cumulative density function (CDF) of the obtained vesselness values to obtain the threshold value as below:

\( \mathrm{TV} = \min \{ k : \mathrm{CDF}(k) \ge 0.85 \} \)
where k is the quantized vesselness value. After applying the obtained threshold value, a proper length filter is also applied to eliminate small regions. The obtained results for the thick vessels are shown in Fig. 6g.
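A sketch of this CDF-based rule, assuming the vesselness map has already been quantized; the 0.85 fraction corresponds to the 15 % vessel-pixel ratio noted above:

```python
import numpy as np

def cdf_threshold(vesselness, fraction=0.85):
    """Smallest quantized vesselness value whose cumulative density
    reaches `fraction`; roughly the top (1 - fraction) of the pixels
    then exceed the returned threshold."""
    values, counts = np.unique(vesselness, return_counts=True)
    cdf = np.cumsum(counts) / counts.sum()
    return values[np.searchsorted(cdf, fraction)]
```

For example, `vesselness_map > cdf_threshold(vesselness_map)` keeps approximately the brightest 15 % of pixels as thick-vessel candidates.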
4.5 Label filtering for small region removing
To eliminate small regions, connected component labeling is used to identify individual objects in the thin and thick vessel images. Connected component labeling is a simple image analysis technique that scans an image pixel by pixel and groups its pixels into components based on pixel connectivity. The label filtering isolates the individual objects using the 4-connected neighborhood and label propagation. The number of pixels in each labeled component is used as a measure of the region's length; if the area of a region is smaller than a certain value, that region is removed. We experimentally tried different values for eliminating small regions from the thin and thick vessels and found that the best limits for the thin vessels (structuring elements of radii 2 and 4) are 60 and 150 pixels, respectively, and for the thick vessels 300 pixels. These values were obtained for retinal images with a size of about 700 × 600 pixels. The details of these experiments are given in Sect. 5.2.
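The label filtering described above can be sketched with a breadth-first label propagation over the 4-connected neighborhood. This is a plain illustrative implementation, not the paper's code; the limits 60, 150, or 300 from the text would be passed as `min_pixels`.

```python
import numpy as np
from collections import deque

def remove_small_regions(binary, min_pixels):
    """4-connected component labeling; drop components smaller than
    min_pixels and return the filtered binary image."""
    binary = binary.astype(bool)
    out = np.zeros(binary.shape, dtype=bool)
    visited = np.zeros(binary.shape, dtype=bool)
    H, W = binary.shape
    for sr in range(H):
        for sc in range(W):
            if not binary[sr, sc] or visited[sr, sc]:
                continue
            # BFS label propagation over the 4-neighborhood
            comp, queue = [], deque([(sr, sc)])
            visited[sr, sc] = True
            while queue:
                r, c = queue.popleft()
                comp.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < H and 0 <= nc < W and \
                       binary[nr, nc] and not visited[nr, nc]:
                        visited[nr, nc] = True
                        queue.append((nr, nc))
            if len(comp) >= min_pixels:     # keep only long-enough regions
                for r, c in comp:
                    out[r, c] = True
    return out
```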
4.6 Final vessel network detection
The final blood vessels are obtained by integrating the thin and thick vessel networks using the logical OR function. The final blood vessel network is shown in Fig. 6h.
5 Experimental results
In the first section of our experiments, the effect of the different parameters of the proposed method was evaluated on images from the publicly available DRIVE database [34]. The DRIVE database consists of 40 images along with manual segmentations of the vessels, divided into a training set and a test set of 20 images each. These images were captured in digital form using a Canon CR5 3CCD camera at a 45° field of view (FOV). The size of the images is 565 × 584 pixels, with 8 bits per color channel.
To evaluate the proposed method, we used detection accuracy (ACC), true positive rate (TPR), and false positive rate (FPR) as performance measures. The ACC is defined as the ratio of the number of correctly classified pixels to the total number of pixels. The TPR is defined as the ratio of the number of correctly detected vessel pixels to the total number of vessel pixels in the ground truth images. The FPR is defined as the ratio of the number of non-vessel pixels that were classified as vessels to the total number of non-vessel pixels.
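These three measures can be written directly from the confusion-matrix counts; a short sketch (the mask names are illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """ACC, TPR, FPR of a binary vessel map against the ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly detected
    fp = np.sum(pred & ~truth)     # non-vessel pixels marked as vessel
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    acc = (tp + tn) / pred.size
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return acc, tpr, fpr
```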
The training of the ANFIS was done using images 1–5 of the training set of the DRIVE database. A combination of the least-squares method and the backpropagation gradient descent method is applied to train the FIS membership function parameters to emulate the given training data set, as explained in Sect. 4.2. The images hand-labeled by the first human expert were used as ground truth.
5.1 Experiment on CDF-based thresholding
This experiment was done to evaluate the effect of the CDF-based threshold on the proposed method. This threshold was applied to the vesselness values to extract the thick vessels. We evaluated the TPR and FPR of the proposed method when different values of the CDF-based threshold were used. We used images 6–20 from the training set of the DRIVE database in this experiment. Figure 8 illustrates the obtained results. From the figure, when the value of K is reduced from 1 to 0.85, the variation in TPR is larger than that in FPR; from 0.85 to 0.7, the TPR is fixed and only the FPR increases. Therefore, a good trade-off between the TPR and FPR values is obtained when the CDF is equal to 0.85. Based on this experiment, we used the vesselness value (k) of this point as the threshold value for detection of the thick vessels. The ROC curve of the proposed method, when only the CDF threshold was changed, was extracted from this figure and is shown in Fig. 9 for a better understanding of the effect of the CDF threshold.
5.2 Experiment on the size of length filters
To evaluate the effect of length filtering on the proposed method, we applied different values from 0 to 500 pixels for the thin and thick vessels. We separately calculated the accuracy (ACC) of the thin and thick vessels for the different length-filter values. In this experiment, images 6–20 from the training set of the DRIVE database were used. Figure 10 illustrates the obtained results. From the figure, the best accuracy for the thin vessels with radii 2 and 4 is obtained when the size of the length filter is equal to 60 and 150 pixels, respectively. The best accuracy for the thick vessels is obtained when the size of the length filter is equal to 300 pixels.
5.3 Experiment on feature vector
To evaluate the benefit of the proposed LBP operator and the selected feature vector, we implemented the proposed method using different feature vectors obtained from RGB values, \( {\text{LBP}}^{\text{riu2}} \) values, and the proposed LBP (\( {\text{LBP}}^{\text{NE}} \)) values. For both LBP operators, not only the obtained LBP values but also the combination of the LBP values and their transition values (T) were used as the feature vector. In these experiments, images 1–5 from the training set of the DRIVE database were used to train the ANFIS, and all images in the test set of the DRIVE database were used as test samples. The obtained results are shown in Table 1. From these results, it is clear that using the LBP operator is superior to using RGB values. The highest performance was achieved when the combination of the proposed LBP values (LBPNE) and their transition values (T) was used; it is better in all performance measures and achieves a TPR about 2 % higher than that of the classical LBP (LBPriu2).
5.4 Comparison with other methods using the DRIVE database
To emphasize the ability of the proposed method, we compared it with some state-of-the-art blood vessel detection methods on all images in the test set of the DRIVE database [34]. For this purpose, the methods proposed by Chaudhuri et al. [1], Niemeijer et al. [3], Jiang et al. [9], Zhang et al. [10], Delibasis et al. [13], Soares et al. [17], Staal et al. [18], Mendonca et al. [21], Palomera-Perez et al. [22], and Martinez-Perez et al. [23] were used for comparison. The results of the other methods can be obtained from the DRIVE database web site [34] or from their original papers. These results are summarized in Table 2.
The TPR of the proposed method is higher than that of the others, while its FPR does not exceed 3.91 %. The results obtained on image 16 of the DRIVE database for the proposed method and some state-of-the-art methods are shown in Fig. 11 for better comparison.
5.5 Comparison with other methods using the STARE database
The proposed method was also compared with some state-of-the-art methods on the STARE database [2]. We selected 20 images from the STARE database, ten of which contain pathology. These images were captured in digital form using a TopCon TRV-50 fundus camera at a 35° field of view (FOV). The size of the images is 700 × 605 pixels, with 8 bits per color channel. Two observers manually segmented all images; the performance of all methods is compared using the first observer's segmentation as ground truth. The previously trained ANFIS, which was trained using the DRIVE images, was used again to assess the robustness of the proposed method. In this experiment, the methods proposed by Chaudhuri et al. [1], Hoover et al. [2], Staal et al. [18], Soares et al. [17], Martinez-Perez et al. [23], Mendonca et al. [21], Palomera-Perez et al. [22], and Zhang et al. [10] were used for comparison. The results of the other methods are extracted from their original papers. The obtained results are presented in Table 3.
In the obtained results, the TPR value of the proposed method is 75.9 %, higher than the others, while its FPR value does not exceed 4.4 %. The accuracy of the proposed method is similar to that of the others. The results of the proposed method on four images of the STARE database are also shown in Fig. 12. Since in this experiment the test set and the training set are completely independent, the obtained results show the robustness of the proposed method.
To perform a fair comparison, the TPR values of the proposed method and some state-of-the-art methods at the same FPR values on both the DRIVE and STARE databases are presented in Tables 4 and 5. The methods proposed by Chaudhuri et al. [1], Hoover et al. [2], Niemeijer et al. [3], Zana et al. [6], Jiang et al. [9], Zhang et al. [10], Delibasis et al. [13], Soares et al. [17], Staal et al. [18], Palomera-Perez et al. [22], and Martinez-Perez et al. [23] were used. For each method, the TPR value was extracted directly from its ROC curve. From these tables, the proposed method has high TPR values compared to most existing methods and competes with the best existing method on both the DRIVE and STARE databases. Its average TPR value is greater than 75 %.
Furthermore, the proposed method has a low computational cost and competes with existing fast methods (see Table 6). Without optimization of its MATLAB code, it takes about 3.7 min to process one image of the DRIVE database and 4.3 min for one image of the STARE database on a PC with a Pentium-IV 3.2 GHz CPU and 2.0 GB RAM. These running times were obtained by averaging the running times over all images of the DRIVE and STARE databases. In real applications, the computation time can be significantly reduced by implementing the algorithm in C/C++.
6 Conclusion
In this paper, we proposed a novel and easy-to-implement algorithm for automatic blood vessel extraction, which combines the multi-resolution LBP operator and adaptive neuro-fuzzy inference system. Since it uses multi-scale features, which are obtained using LBP, all vessels with different thicknesses and orientations can be detected efficiently. In the proposed method, the thin and thick blood vessels are extracted separately by applying top-hat transform and simple thresholding as well as length filtering. The final vessels are obtained by combining the thin and thick vessels using logical OR function.
Experiments on different test images from the DRIVE and STARE databases were conducted to assess the performance of the proposed method in comparison with some of the best state-of-the-art methods. The proposed method is competitive with or better than other state-of-the-art methods. On the DRIVE and STARE databases, the TPR of the proposed method is 74.4 and 75.9 %, respectively, while its FPR is 3.9 and 4.4 %, respectively. The overall accuracy of the proposed method is greater than 94 %. The running time of the proposed method also competes with existing fast methods: it can process one image in about 3.7 min.
To improve the performance of the proposed method and reduce its FPR, we need to use a more complex postprocessing procedure and a more efficient LBP operator that extracts line, junction, and bifurcation patterns. We will investigate these aspects in our future work.
References
Chaudhuri S, Chatterjee S, Katz N, Nelson M, Goldbaum M (1989) Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans Med Imaging 8(3):263–269
Hoover A, Kouznetsova V, Goldbaum M (2000) Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging 19(3):203–210
Niemeijer M, Staal JJ, Van Ginneken B, Loog M, Abramoff MD (2004) Comparative study of retinal vessel segmentation methods on a new publicly available database. SPIE Med Imaging 5370:648–656
Sopharak A, Uyyanonvara B, Barman S, Williamson TH (2008) Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods. Comput Med Imaging Graph 32(8):720–727
Doi K (2007) Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Comput Med Imaging Graph 31(4–5):198–211
Zana F, Klein JC (2001) Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Trans Image Process 10(7):1010–1019
Matsopoulos GK, Asvestas PA, Delibasis KK, Mouravliansky NA, Zeyen TG (2008) Detection of glaucomatous change based on vessel shape analysis. Comput Med Imaging Graph 32(3):183–192
Lin T, Zheng Y (2003) Node-matching-based pattern recognition method for retinal blood vessel images. Opt Eng 42(11):3302–3306
Jiang X, Mojon D (2003) Adaptive local thresholding by verification based multi threshold probing with application to vessel detection in retinal images. IEEE Trans Pattern Anal Mach Intell 25(1):131–137
Zhang B, Zhang L, Zhang L, Karray F (2010) Retinal vessel extraction by matched filter with first-order derivative of Gaussian. Comput Biol Med 40:438–445
Narasimha-Iyer H, Mahadevan V, Beach JM, Roysam B (2008) Improved detection of the central reflex in retinal vessels using a generalized dual-Gaussian model and robust hypothesis testing. IEEE Trans Inf Technol Biomed 12(3):406–410
Zhou L, Rzeszotarski MS, Singerman LJ, Chokreff JM (1994) The detection and quantification of retinopathy using digital angiograms. IEEE Trans Med Imaging 13(4):619–626
Delibasis KK, Kechriniotis AI, Tsonos C, Assimakis N (2010) Automatic model-based tracing algorithm for vessel segmentation and diameter estimation. Comput Methods Programs Biomed. doi:10.1016/j.cmpb.2010.03.004
Adel M, Moussaoui A, Rasigni M, Bourennane S, Hamami L (2010) Statistical-based tracking technique for linear structures detection: application to vessel segmentation in medical images. IEEE Signal Process Lett 17(6):555–558
Vlachos M, Dermatas E (2010) Multi-scale retinal vessel segmentation using line tracking. Comput Med Imaging Graph 34(3):213–227
Perfetti R, Ricci E, Casali D, Costantini G (2007) Cellular neural networks with virtual template expansion for retinal vessel segmentation. IEEE Trans Circuits Syst II 54:141–145
Soares JVB, Leandro JJG, Cesar Jr RM, Jelinek HF, Cree MJ (2006) Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imaging 25:1214–1222
Staal JJ, Abramoff MD, Niemeijer M, Viergever MA, Van Ginneken B (2004) Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 23(4):501–509
Supot S, Thanapong C, Chuchart P, Manas S (2007) Automatic segmentation of blood vessels in retinal images based on fuzzy K-median clustering. In: Proceedings of the IEEE international conference on integration technology, Shenzhen, China, pp 584–588
Garg S, Sivaswamy J, Chandra S (2007) Unsupervised curvature-based retinal vessel segmentation. In: Proceedings of the IEEE international symposium on bio-medical imaging pp 344–347
Mendonca AM, Campilho A (2006) Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans Med Imaging 25(9):1200–1213
Palomera-Perez MA, Martinez-Perez ME, Benitez-Perez H, Ortega-Arjona JL (2010) Parallel Multiscale feature extraction and region growing: application in retinal blood vessel detection. IEEE Trans Inf Technol Biomed 14(2):500–506
Martinez-Perez ME, Hughes AD, Thom SA, Bharath AA, Parker KH (2007) Segmentation of blood vessels from red-free and fluorescein retinal images. Med Image Anal 11(1):47–61
Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353
Jang JSR, Sun CT, Mizutani E (1997) Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. Prentice Hall, Upper Saddle River
Jang JSR (1993) ANFIS: adaptive-network-based fuzzy inference systems. IEEE Trans Syst Man Cybern 23(3):665–685
Ojala T, Pietikäinen M, Harwood D (1996) A comparative study of texture measures with classification based on feature distribution. Pattern Recogn 29:51–59
Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
Ahonen T, Hadid A, Pietikainen M (2006) Face description with local binary patterns: application to face recognition. IEEE Trans Pattern Anal Mach Intell 28(12):2037–2041
Nanni L, Lumini A (2008) Local binary patterns for a hybrid fingerprint matcher. Pattern Recogn 41:3461–3466
Lucieer A, Stein A, Fisher P (2005) Multivariate texture-based segmentation of remotely sensed imagery for extraction of objects and their uncertainty. Int J Remote Sens 26(14):2917–2936
Fathi A, Naghsh-Nilchi AR (2012) Noise tolerant local binary pattern operator for efficient texture analysis. Pattern Recognit Lett 33:1093–1100
Otsu N (1979) A threshold selection method from Gray-Level histograms. IEEE Trans Syst Man Cybern 9(1):62–66
Fathi, A., Naghsh-Nilchi, A.R. Integrating adaptive neuro-fuzzy inference system and local binary pattern operator for robust retinal blood vessels segmentation. Neural Comput & Applic 22 (Suppl 1), 163–174 (2013). https://doi.org/10.1007/s00521-012-1118-8