Interactive-cut: Real-time feedback segmentation for translational research

https://doi.org/10.1016/j.compmedimag.2014.01.006

Abstract

In this contribution, a scale-invariant image segmentation algorithm is introduced that “wraps” the algorithm's parameters through its interactive behavior, sparing the user the definition of “arbitrary” numbers that he or she cannot readily interpret. To this end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustment by the user. In addition, the color or gray-value information needed by the approach can be extracted automatically around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach.

Introduction

Segmentation of digital imagery in general is a labeling problem in which the goal is to assign to each pixel in an input image a unique label that represents an object. The input image can have an arbitrary dimension, like 1D, 2D or 3D, and the pixel values can be in color or gray level. An example of an object in digital imaging would be a person in a video, and an example of an object in medical imaging would be an anatomical structure in a patient scan. These labeled images are referred to as the “segmentation” of the input image or the “segmented” image [1]. In computer science, several types of segmentation algorithms exist, like Active Contours [2], [3], Active Appearance Models [4], graph-based approaches [5], fuzzy-based approaches [6], or neural network approaches [7].

However, in the medical field, automatic segmentation methods are typically suitable only for a specific type of pathology in a specific imaging modality and still fail from time to time; moreover, most automatic approaches need precise parameter settings to provide good results. As a consequence, the state of the art, or rather clinical practice, in medical departments is still manual slice-by-slice segmentation, which is very time consuming. Thus, interactive segmentation approaches like [8], [9], [10], [11], [12] are becoming more and more popular, because they allow the user to support the algorithm with additional information, especially in difficult segmentation tasks. In this contribution, we introduce an interactive graph-based approach with a specific design of the graph that requires only one user-defined seed point inside an object for the segmentation process. The algorithm is therefore suitable for real-time segmentation in the sense of giving the user real-time feedback of the segmentation result.
In addition, the specific graph construction enables the mincut to be performed within a second on modern machines, and the color-value information needed by the approach can be extracted automatically around the user-defined seed point. The focus of the evaluation in this contribution is on medical data: as a proof of concept, the presented scheme has been evaluated with fixed seed points mainly on medical image data in 2D and 3D, like brain tumors, cerebral aneurysms and vertebral bodies. However, the segmentation approach can also be applied to arbitrary image data.

The paper is organized as follows. Section 2 presents the details of the proposed algorithm. Section 3 discusses the results of our experiments. Section 4 concludes the paper and outlines areas for future research.

Section snippets

Methods

Our interactive segmentation algorithm works in 2D and 3D and starts by setting up a directed graph from a user-defined seed point that is located inside the object to be segmented [13]. To this end, points are sampled along rays cast through a contour (2D) or surface (3D) of an object template to create the graph. These sampled points are the nodes n ∈ V of the graph G(V, E). In addition, e ∈ E is an element of the corresponding set of edges, which consists of edges between the nodes and edges that connect the

Results

The interactive segmentation algorithms have been implemented within the medical prototyping platform MeVisLab (http://www.mevislab.de) as additional MeVisLab modules written in C++ (note: although the focus of the prototyping platform MeVisLab is on medical applications, it can also process images from other fields). The special graph construction of the algorithm makes it eligible for real-time segmentation, because it only considers subsets of

Discussion

In this contribution, a novel interactive image segmentation algorithm has been presented that provides the user with real-time feedback of the segmentation result during the segmentation process. To this end, a specific graph-based segmentation scheme has been elaborated that needs only one user-defined seed point inside the object that has to be segmented. In contrast to other approaches, where a more intensive initialization is needed, a single user-defined seed point makes the algorithm

Author contribution statement

Conceived and designed the experiments: JE, TL, RS. Performed the experiments: JE, TL, RS. Analyzed the data: JE, TL, RS. Contributed reagents/materials/analysis tools: JE, TL, RS, BF, CN. Wrote the paper: JE.

Conflict of interest statement

The authors of this paper have no potential conflicts of interest.

Acknowledgements

First of all, the authors would like to thank the physicians Neha Agrawal, M.B.B.S., Dr. med. Barbara Carl, Thomas Dukatz, Christoph Kappus, Dr. med. Malgorzata Kolodziej and Dr. med. Daniela Kuhnt for performing the manual slice-by-slice segmentations of the medical images and thereby providing the 2D and 3D masks for the evaluation. Furthermore, the authors thank Drs. Fedorov, Tuncali, Fennessy and Tempany for sharing the prostate data collection. Finally, the authors would like to

References (45)

  • V. Vezhnevets et al., GrowCut: interactive multi-label N-D image segmentation
  • F. Heckel et al., Sketch-based editing tools for tumour segmentation in 3D medical images, Computer Graphics Forum (2013)
  • D. Barbosa et al., Real-time 3D interactive segmentation of echocardiographic data through user-based deformation of B-spline explicit active surfaces, Computerized Medical Imaging and Graphics (2013)
  • J. Egger et al., Template-cut: a pattern-based segmentation paradigm, Scientific Reports, Nature Publishing Group (NPG) (2012)
  • K. Li et al., Optimal surface segmentation in volumetric images – a graph-theoretic approach, IEEE Transactions on Pattern Analysis and Machine Intelligence (2006)
  • Y. Boykov et al., An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, IEEE Transactions on Pattern Analysis and Machine Intelligence (2004)
  • M.H.A. Bauer et al., A fast and robust graph-based approach for boundary estimation of fiber bundles relying on fractional anisotropy maps
  • J. Egger et al., Graph-based tracking method for aortic thrombus segmentation
  • J. Egger et al., Aorta segmentation for stent simulation
  • J. Egger et al., Nugget-cut: a segmentation scheme for spherically- and elliptically-shaped 3D objects
  • J. Egger, PCG-cut: graph driven segmentation of the prostate central gland, PLoS ONE (2013)
  • J. Egger et al., Square-cut: a segmentation algorithm on the basis of a rectangle shape, PLoS ONE (2012)
    Jan Egger is currently a Senior Researcher at the Institute for Computer Graphics and Vision of the Graz University of Technology in Austria. He received his German pre-diploma and diploma degree in Computer Science from the University of Wiesbaden, Germany, in 2001 and 2004, respectively, his Master's degree in Computer Science from the University of Applied Sciences, Darmstadt, Germany, in 2006, his first Ph.D. in Computer Science from the University of Marburg, Germany, in 2009, and his second interdisciplinary Ph.D. in Human Biology from the University Hospital of Marburg, Germany, in 2012. His research interests are Medical Image Analysis, Computer Vision, and Image-Guided Therapy, and he is currently working towards his German Habilitation in Computer Science.

    Tobias Lüddemann received his diploma in Mechanical Engineering with majors in Medical Engineering and Information Technology from the Technical University of Munich, Germany, in 2013. His current research interests include biomedical image processing and medical device networks.

    Robert Schwarzenberg received his Bachelor's degree in computer science from the University of Marburg, Germany, in 2012. Currently he is studying towards a dual degree in chemistry and English linguistics and literature.

    Bernd Freisleben is a full professor of computer science in the Department of Mathematics and Computer Science at the University of Marburg, Germany. He received his Master's degree in computer science from the Pennsylvania State University, USA, in 1981, and his Ph.D. degree in computer science from the Darmstadt University of Technology, Germany, in 1985. His research interests include computational intelligence, scientific computing, multimedia computing, and medical image processing.

    Christopher Nimsky is a professor and chairman of the Department of Neurosurgery at the University of Marburg, Germany. After medical school at the University of Heidelberg, he received his neurosurgical training at the Department of Neurosurgery at the University of Erlangen-Nuremberg and became a staff member in 1999. He became an associate professor in 2001 after finishing his Ph.D. thesis on “intraoperative magnetic resonance imaging” and was vice chairman from 2005 to 2008 at the Department of Neurosurgery in Erlangen. His research focus is on medical technologies in neurosurgery, like intraoperative imaging and multimodal navigation, as well as molecular biology in neurooncology.

    1 Joint first authorship.

    2 Joint senior authorship.
