Dynamic edge tracing: Boundary identification in medical images

https://doi.org/10.1016/j.cviu.2009.07.003

Abstract

Medical image segmentation is a sufficiently complex problem that no single strategy has proven to be completely effective. Historically, region growing, clustering, and edge tracing have been used, and while significant advances have been made in the first two, research into automatic, recursive boundary following has not kept pace. A new, advanced edge-tracing algorithm capable of combining edge, region, and pixel-classification information, and suitable for magnetic resonance image analysis, is described. The algorithm is inspired by automatic target tracking as used in civilian and military aerospace operations. Comparison with clustering and level sets is performed. Results indicate that no method is uniformly superior, that the new algorithm provides information not available from the other approaches, and that it can utilize a variety of sources, including results from other methods. The algorithm is applied to two-dimensional slice images, and extension to three-dimensional images is discussed.

Introduction

In medical images, the identification of object boundaries or regions of interest can provide valuable information for diagnosis and treatment of disease. Such segmentation operations can be performed manually but are very time consuming and subject to inter- and intra-operator variability so that automatic methods are preferable [1].

Segmentation of medical images involves three main problems. First, the images contain noise that can alter the intensity of a pixel such that its classification becomes uncertain. Second, the images contain intensity nonuniformity, in which the average intensity level of a single tissue class varies over the extent of the image. Third, the images have finite pixel size and are thus subject to partial volume averaging, in which individual pixels contain a mixture of tissue classes and the intensity of a pixel may not be consistent with any one class.
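These three degradations are easy to reproduce on a synthetic image. The following sketch (illustrative only, not taken from the study; the phantom geometry and intensity values are our own choices) applies additive Gaussian noise, a smooth multiplicative bias field, and block averaging that mimics partial volume mixing at a coarser pixel size:

```python
import numpy as np

rng = np.random.default_rng(42)
H, W = 64, 64
# Two "tissue classes": background intensity 50, object intensity 150
img = np.full((H, W), 50.0)
img[18:46, 18:46] = 150.0

# 1. Noise: additive Gaussian noise perturbs individual pixel intensities,
#    making per-pixel classification uncertain
noisy = img + rng.normal(0.0, 10.0, img.shape)

# 2. Intensity nonuniformity: a smooth multiplicative bias field makes the
#    mean intensity of a single tissue class drift across the image
yy, xx = np.mgrid[0:H, 0:W]
bias = 0.8 + 0.4 * (xx / (W - 1))      # 0.8 on the left edge, 1.2 on the right
nonuniform = img * bias

# 3. Partial volume averaging: resampling to a coarser grid mixes classes
#    inside pixels that straddle the object boundary
coarse = img.reshape(H // 4, 4, W // 4, 4).mean(axis=(1, 3))

# A coarse pixel overlapping the object corner now holds a mixed intensity
# that matches neither pure class
print(coarse[4, 4])  # 75.0: 4 object + 12 background fine pixels averaged
```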

Automatic image segmentation is an area of active research that has produced a variety of methods. Historically, these could be divided into three main approaches: region-based methods, pixel classification, and edge detection, including edge tracing [2]. The same imprint can be seen today, especially when the methods of active surfaces [3], [4] are viewed as advances in region growing [5]. Likewise, advances in pixel classification have produced sophisticated clustering algorithms [3], [6], but similar advances in automatic edge tracing have not been so apparent [7].

We distinguish automatic and semi-automatic edge tracing by the level of involvement required by the operator. Semi-automatic methods, for example those based on graph searching [8], [9], require continuous input from the operator during the tracing process. Automatic edge-tracing methods require operator initialization, which may include the identification of a seed point, but subsequent operator involvement is not required until after the tracing operation is complete.

The main criticism directed toward automatic edge tracing as an approach to image segmentation is poor robustness. However, the identification of a coherent boundary by linking neighboring edge points provides useful information for segmentation that is not obtained by other methods.

The earliest edge-tracing algorithms required 8-neighbor connectivity for successive edge pixels, and propagation from one pixel to the next was often performed solely on the basis of local gradient information [8]. These methods were very sensitive to noise and intensity nonuniformity, which can cause discontinuities in the object boundary or induce deviation from the boundary during the tracing process. Efforts have been made to improve robustness by providing a means for crossing discontinuities and by incorporating local intensity information for better edge selectivity [10], [11], [12], [13]. However, local image information alone does not, in general, permit the production of high-quality boundaries.
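As an illustration of this early, purely local style of tracer (a hypothetical sketch, not any specific published algorithm), a greedy follower can simply step to whichever unvisited 8-neighbour has the largest gradient magnitude; with no memory beyond the visited set, a single noisy pixel is enough to derail it:

```python
import numpy as np

def greedy_trace(grad_mag, start, n_steps=20):
    """Greedy 8-neighbour edge following: at each step move to the unvisited
    neighbour with the largest gradient magnitude. Purely local, so easily
    derailed by noise or gaps in the boundary."""
    H, W = grad_mag.shape
    path = [start]
    visited = {start}
    r, c = start
    for _ in range(n_steps):
        best, best_g = None, -1.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W and (nr, nc) not in visited:
                    if grad_mag[nr, nc] > best_g:
                        best, best_g = (nr, nc), grad_mag[nr, nc]
        if best is None:
            break                 # no unvisited neighbour remains
        r, c = best
        visited.add(best)
        path.append(best)
    return path

# A clean vertical edge: strong gradient along column 3
g = np.zeros((6, 6))
g[:, 3] = 1.0
p = greedy_trace(g, start=(0, 3), n_steps=5)
print(p)  # follows column 3 downward: [(0,3), (1,3), (2,3), (3,3), (4,3), (5,3)]
```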

Algorithms that integrate local and global image information have also appeared. A Hopfield neural network was used to combine information from spatially-separated edge segments to form contours in a two-dimensional (2D) color image [14]. This was tested on a limited dataset so it is not clear how performance would vary in cases of significant intensity noise or of irregularly-shaped contours. Multiresolution pyramids were used to connect discontinuities in traced edges for extraction of the inner and outer skull contours and the skin contour in medical head images [15] but a binary intensity threshold was used which can cause problems in cases of intensity nonuniformity. A global saliency relation was developed by modeling the paths of hypothetical, constant-speed particles undergoing Brownian motion to link edge segments into object boundaries [16]; however, this method has been explicitly directed toward images with very smooth object contours. Size constraints and a centrally located seed pixel were used to obtain boundaries from ultrasound and computed tomography (CT) images [17], although certain contour convexity requirements also exist for this method.

Like [17] and its predecessor [18], though developed independently, we have recognized the potential for automatic target-tracking algorithms to serve as advanced edge tracers [19]. An example of the implicit use of tracking in boundary identification can be seen when tracing the boundary of an object in an image with a finger in order to demonstrate its position to another observer. The boundary is static, but a time dimension and a history are introduced by manual tracing, with attendant benefits in information transfer. In the automatic case, the target-tracking framework is applied by first assuming that a hypothetical target (a point-sized object) has followed the boundary of an object in an image and that measurements of its position have been taken at equal time intervals. The position information is obtained by applying an edge detector to the image to identify discrete edge points. These position measurements may also contain measurement noise. The edge is traced, identifying an object boundary, by following the path of the target using a statistically-based target-tracking algorithm with a Kalman filter to identify and link consecutive edge points. Multiple edge features, including position and intensity information, can be tracked simultaneously, permitting a high level of discrimination in the tracing process without invoking complicated heuristics.
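A minimal sketch of this idea, assuming a constant-velocity motion model and simple nearest-neighbour data association (a stand-in for the statistical association a full tracker would use; this is illustrative only, not the DTRAC implementation, and the noise parameters are arbitrary):

```python
import numpy as np

def kalman_trace(edge_points, start, n_steps=10, gate=2.0):
    """Follow a chain of edge points with a constant-velocity Kalman filter.

    edge_points: (N, 2) array of detected edge coordinates.
    start:       initial (x, y) position on the boundary.
    gate:        maximum distance for assigning an edge point to the track.
    """
    F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant-velocity model
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # position-only measurement
    Q = 0.01 * np.eye(4)   # process noise: allows gentle manoeuvres of the target
    R = 0.10 * np.eye(2)   # measurement noise: edge localization error

    x = np.array([start[0], start[1], 0.0, 0.0])  # state [x, y, vx, vy]
    P = np.eye(4)
    track = [np.array(start, float)]
    pts = np.asarray(edge_points, float)

    for _ in range(n_steps):
        # Predict the next target position along the boundary
        x = F @ x
        P = F @ P @ F.T + Q
        if len(pts) == 0:
            break
        # Associate: nearest unused edge point to the predicted position
        d = np.linalg.norm(pts - x[:2], axis=1)
        i = int(d.argmin())
        if d[i] > gate:
            break                  # no edge data can be assigned: track terminates
        z = pts[i]
        pts = np.delete(pts, i, axis=0)
        # Update the state with the chosen measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

# Noisy measurements along a horizontal edge, traced left to right
rng = np.random.default_rng(0)
edge = np.column_stack([np.arange(1, 9, dtype=float),
                        rng.normal(0.0, 0.1, 8)])
t = kalman_trace(edge, start=(0.0, 0.0), n_steps=8)
print(t.shape)  # (9, 2): the start plus eight linked, filtered edge points
```

The filter smooths the measurement noise while its predicted position carries the track across small gaps, which is the property that motivates using tracking for edge linking.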

In the present study, a new algorithm is described and the three main segmentation problems are considered. Edges are traced beginning at an operator-defined starting point to focus on user-selected object boundaries. Intensity features obtained from regions on both sides of an edge can be combined with spatial edge coordinates in the tracking process. Global information in the form of pixel-classification data and prior knowledge in the form of model parameter selection are used to influence the track direction. Spatial edge coordinates are interpolated to sub-pixel resolution for smooth contours, while sharp transitions are also permitted by modeling target acceleration. Additionally, the automatic boundary will not self-intersect, supporting valid topology, and a recovery mechanism is implemented whereby alternate paths can be investigated if the chosen path terminates without producing a closed contour. This new algorithm is referred to as DTRAC, Dynamic edge Tracing with Recovery And Classification. The objective of this study is to demonstrate its capabilities in the segmentation of magnetic resonance (MR) head images and, in doing so, to view the potential of advanced edge tracers in medical image segmentation. Presently there is very little research into automatic edge tracing, even though boundary following is the preferred method of medical experts, who are widely considered to provide the gold standard for in vivo medical image segmentation.

We compare methods from three main branches of segmentation algorithms: SNake Automatic Partitioning (SNAP) [20] is a freely-available tool based on level-set methods, akin to region growing; the FMRIB Automated Segmentation Tool (FAST) [21] uses a pixel-clustering approach; and DTRAC is based on edge tracing. These are three fundamentally different approaches to segmentation. We show an example to illustrate the point that the information extracted by edge tracing is not encapsulated in either of the other two methods, and an example of how DTRAC can be used to add edge information to pixel classifications obtained from FAST. These results allow us to state that automatic edge tracing has value for medical image segmentation and that further exploration is warranted. The DTRAC algorithm is presently applied to 2D slice images from a three-dimensional (3D) medical image, although an extension to allow the processing of 3D images directly may be possible.

The DTRAC algorithm is not restricted in application to MR head images. Any image boundary where an edge exists as a contrast in image intensity is a candidate for application of DTRAC. Head images do provide challenging examples, though, as in the case of white matter (WM) boundaries, which have very irregular shapes.

We believe that this study is the first application of target tracking-based edge tracing to the segmentation of MR head images. Early work on the use of target tracking for the purpose of edge tracing can be found in [22], although the algorithm described therein does not permit the formation of closed contours. Closed contours are formed in [17], but the object boundaries must be approximately convex such that line of sight to a central seed pixel is retained for all edge points. This may be suitable, as intended, for certain tasks in ultrasound image analysis, but the boundaries that we wish to extract from MR head images are generally not convex. In DTRAC, nonconvex contours are readily accommodated. In the following, Section 2 describes the DTRAC algorithm and Section 3 gives the methods used for evaluation. Results from the segmentation of MR head images that include noise, intensity nonuniformity, and partial volume averaging are presented in Section 4. Sections 5 and 6 provide further discussion and conclusions.

Section snippets

Dynamic edge tracing with recovery and classification (DTRAC)

Fig. 1 shows a block diagram of the major processing steps. Edge detection and edge feature extraction are performed on the input image and the results are used to drive a target tracking algorithm. Beginning at an operator-defined starting location, the tracking algorithm follows the path of a hypothetical target along an edge until the starting point is revisited (a closed boundary is formed) or until no further edge data can be assigned to the track. In the event that the track terminates …

Tanimoto similarity measure

The Tanimoto similarity measure [28], TS, is defined as the ratio of the number of elements in the intersection of two sets to the number of elements in their union. For two sets A and B, TS = η(A ∩ B) / η(A ∪ B), where η denotes the cardinality of a set. This is used to evaluate segmented regions: binary images consisting of all pixels within the boundary of interest.
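A direct implementation for binary masks (an illustrative sketch; the convention of returning 1.0 for two empty masks is our own assumption):

```python
import numpy as np

def tanimoto_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto similarity of two binary masks: |A intersect B| / |A union B|."""
    a = a.astype(bool)
    b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return np.logical_and(a, b).sum() / union

# Two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 pixels
print(tanimoto_similarity(a, b))  # intersection 4, union 6 -> 0.666...
```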

Hausdorff distance

The Hausdorff distance [29] is defined as the maximum of the closest-point distances between two contours. For contours A = {a1, a2, …, am} and B = {b1, b2, …, bn}, H(A, B) = max(h(A, B), h(B, A)), where h(A, B) = max over a ∈ A of min over b ∈ B of ‖a − b‖ is the directed Hausdorff distance.
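This definition can be computed naively over all point pairs (an illustrative sketch; for large contours a spatial index such as a k-d tree would be preferable):

```python
import numpy as np

def directed_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """h(A, B) = max over a in A of min over b in B of ||a - b||."""
    # Pairwise distance matrix of shape (m, n) via broadcasting
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b) -> float:
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

A = [(0, 0), (1, 0)]
B = [(0, 0), (1, 0), (1, 3)]
print(hausdorff(A, B))  # (1, 3) is 3 away from its nearest point in A -> 3.0
```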

Results

Synthetic and real MR images were used to examine the DTRAC, FAST, and SNAP segmentation algorithms under conditions of noise, intensity nonuniformity, and partial volume averaging. Brain WM regions were selected for segmentation because of their relatively large size and their detailed, convoluted, and often low-contrast boundaries. Algorithm performance in noise and intensity nonuniformity was evaluated using synthetic images from the Montreal Neurological Institute (MNI) database, normal …

Discussion

The results of the experiments involving noise, nonuniformity, and partial volume averaging demonstrate that no one algorithm is uniformly superior. The SNAP method is best at low levels of nonuniformity but its performance degrades as nonuniformity is increased, a decrease that is reasonable to expect since the object intensity distribution is defined using globally applied thresholds. The FAST algorithm is best at the higher levels of noise but unusually high numbers of pixel …

Conclusion

An advanced edge-tracing algorithm based on dynamic target tracking is capable of identifying object boundaries in MR medical images. The tracking algorithm operates in multiple dimensions, permitting the seamless use of multiple features to help discriminate object boundaries. Candidate edge points are ranked by evaluation of an objective function wherein local and global image information are combined. The algorithm is able to utilize pixel-classification results from other methods and …

Acknowledgments

The synthetic MR data sets were provided by the McConnell Brain Imaging Centre of the Montreal Neurological Institute, McGill University and are available at http://www.bic.mni.mcgill.ca/brainweb/.

The real MR brain data sets and their manual segmentations were provided by the Center for Morphometric Analysis at Massachusetts General Hospital and are available at http://www.cma.mgh.harvard.edu/ibsr/.

References (44)

  • I. Pitas, Digital Image Processing Algorithms, 1993.
  • A. Falcao et al., An ultra-fast user-steered image segmentation paradigm: live wire on the fly, IEEE Trans. Med. Imaging, 2000.
  • M. Lineberry, Image Segmentation by Edge Tracing, in: Proc. of SPIE, The International Society for Optical Engineering, ...
  • V. Rakotomalala, L. Macaire, M. Valette, P. Labalette, Y. Mouton, J. Postaire, Bidimensional Retinal Blood Vessel...
  • Y. Chen, O. Chen, Robust fully-automatic segmentation based on modified edge-following technique, in: Proc. IEEE Int....
  • V. Barrios, J. Torres, G. Montilla, L. Hernandez, N. Rangel, A. Reigosa, Cellular edge detection using a trained neural...
  • H. Iwata, T. Agui, H. Nagahashi, Boundary Detection of Color Images Using Neural Networks, in: Proc. IEEE Int. Conf. on...
  • H. Soltanian-Zadeh et al., A multiresolution approach for contour extraction from brain images, Med. Phys., 1997.
  • S. Mahamud et al., Segmentation of multiple salient closed contours from real images, IEEE Trans. Pattern Anal. Mach. Intell., 2003.
  • P. Abolmaesumi et al., An interacting multiple model probabilistic data association filter for cavity boundary extraction from ultrasound images, IEEE Trans. Med. Imaging, 2004.
  • P. Abolmaesumi, M.R. Sirouspour, S.E. Salcudean, Real-Time Extraction of Carotid Artery Contours From Ultrasound...
  • D. Withey, Z. Koles, W. Pedrycz, Dynamic edge tracing for 2D image segmentation, in: Proc. 23rd Int. Conf. IEEE...