Interactive-cut: Real-time feedback segmentation for translational research
Introduction
Segmentation of digital imagery is a labeling problem in which the goal is to assign to each pixel of an input image a unique label that represents an object. The input image can have an arbitrary dimension (1D, 2D or 3D), and the pixel values can be color or gray-level. An example of an object in digital imaging is a person in a video; an example of an object in medical imaging is an anatomical structure in a patient scan. The resulting labeled images are referred to as the “segmentation” of the input image or the “segmented” image [1]. In computer science, several types of segmentation algorithms exist, such as active contours [2], [3], active appearance models [4], graph-based approaches [5], fuzzy-based approaches [6], and neural network approaches [7]. In the medical field, however, automatic segmentation methods are typically suitable only for a specific type of pathology in a specific imaging modality and still fail from time to time; moreover, most automatic approaches need precise parameter settings to provide good results. As a consequence, clinical practice in medical departments is still manual slice-by-slice segmentation, which is very time-consuming. Interactive segmentation approaches such as [8], [9], [10], [11], [12] are therefore becoming more and more popular, because they allow the user to support the algorithm with additional information, especially in difficult segmentation tasks. In this contribution, we introduce an interactive graph-based approach with a specific graph design that requires only one user-defined seed point inside an object for the segmentation process. The algorithm is therefore suitable for real-time segmentation, in the sense of giving the user real-time feedback on the segmentation result.
In addition, the specific graph construction enables the mincut to be computed within a second on modern machines, and the color value information needed for the approach can be extracted automatically around the user-defined seed point. The evaluation in this contribution focuses on medical data: as a proof of concept, the presented scheme has been evaluated with fixed seed points mainly on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. However, the segmentation approach can also be applied to arbitrary image data.
The paper is organized as follows. Section 2 presents the details of the proposed algorithm. Section 3 discusses the results of our experiments. Section 4 concludes the paper and outlines areas for future research.
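The paper states that the intensity information needed for the approach is extracted automatically around the user-defined seed point. The following is a minimal sketch of how such an extraction could look; the function name, the square neighborhood, and the `radius` parameter are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: estimate gray-value statistics in a small
# neighborhood around a 2D seed point (row, col). The paper's actual
# extraction scheme is not specified here; this only illustrates the idea.

def seed_statistics(image, seed, radius=2):
    """Mean and standard deviation of gray values in a square
    neighborhood of the given radius around the seed point."""
    rows, cols = len(image), len(image[0])
    r0, c0 = seed
    values = []
    for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
            values.append(image[r][c])
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var ** 0.5

# Toy image: a dark object region with bright surroundings.
image = [[10, 11, 12, 90],
         [11, 10, 13, 95],
         [12, 12, 11, 92],
         [88, 91, 90, 93]]
mean, std = seed_statistics(image, (1, 1), radius=1)
```

Statistics like these could then be used to weight the graph edges so that pixels similar to the seed region are preferred, which is one common way graph-based methods incorporate seed information.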
Section snippets
Methods
Our interactive segmentation algorithm works in 2D and 3D and starts by setting up a directed graph from a user-defined seed point located inside the object to be segmented [13]. To this end, points are sampled along rays cast through the contour (2D) or surface (3D) of an object template to create the graph. These sampled points are the nodes n ∈ V of the graph G(V, E), and e ∈ E is the corresponding set of edges, which consists of edges between the nodes and edges that connect the …
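The ray-based node sampling described above can be sketched as follows for the 2D case, under illustrative assumptions: rays are cast from the seed point at equal angular steps, and nodes are placed at fixed radial steps along each ray. The parameter names are not from the paper.

```python
import math

# Sketch of the ray-based graph-node sampling (2D): cast num_rays rays
# from the seed point and sample nodes_per_ray points along each ray.
# The edges between and along rays, which encode the actual graph
# structure, are omitted here.

def sample_graph_nodes(seed, num_rays=8, nodes_per_ray=4, step=1.0):
    """Return nodes[r][k] = (x, y), node k on ray r."""
    sx, sy = seed
    nodes = []
    for r in range(num_rays):
        angle = 2.0 * math.pi * r / num_rays
        ray = [(sx + (k + 1) * step * math.cos(angle),
                sy + (k + 1) * step * math.sin(angle))
               for k in range(nodes_per_ray)]
        nodes.append(ray)
    return nodes

nodes = sample_graph_nodes((0.0, 0.0), num_rays=4, nodes_per_ray=3)
# 4 rays x 3 nodes per ray = 12 graph nodes in total
```

In 3D the same idea applies with rays distributed over a template surface instead of a contour; the small, template-bounded node set is what keeps the resulting graph compact.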
Results
The interactive segmentation algorithms have been implemented in C++ as additional modules within the medical prototyping platform MeVisLab (http://www.mevislab.de) (note: although the focus of the prototyping platform MeVisLab is on medical applications, it is also possible to process images from other fields). The special graph construction of the algorithm makes it suitable for real-time segmentation, because it only considers subsets of …
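The segmentation itself is obtained from an s-t mincut on the constructed graph. As a toy illustration of that step (not the paper's C++ implementation, which likely uses a specialized solver), the following computes a maximum flow with the BFS-based Edmonds–Karp scheme on a small made-up graph; by the max-flow/min-cut theorem, the flow value equals the mincut value, and the source side of the residual graph yields the object label.

```python
from collections import deque

# Toy s-t max-flow (Edmonds-Karp) on an adjacency-matrix graph.
# capacity[u][v] is the edge capacity from u to v; the min-cut value
# equals the returned max-flow value.

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            break  # no augmenting path left; flow is maximal
        # find the bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck flow along the path
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

# Made-up 4-node graph: source = 0, sink = 3; min-cut value is 3.
cap = [[0, 2, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
```

In the paper's setting the real-time behavior comes from the small, template-bounded graph rather than from the flow algorithm itself; production graph-cut codes typically rely on solvers tuned for vision problems.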
Discussion
In this contribution, a novel interactive image segmentation algorithm has been presented that provides the user with real-time feedback on the segmentation result during the segmentation process. To this end, a specific graph-based segmentation scheme has been elaborated that needs only one user-defined seed point inside the object to be segmented. In contrast to other approaches, which require a more intensive initialization, the single user-defined seed point makes the algorithm …
Author contribution statement
Conceived and designed the experiments: JE, TL, RS. Performed the experiments: JE, TL, RS. Analyzed the data: JE, TL, RS. Contributed reagents/materials/analysis tools: JE, TL, RS, BF, CN. Wrote the paper: JE.
Conflict of interest statement
The authors of this paper have no potential conflicts of interest.
Acknowledgements
First of all, the authors would like to thank the physicians Neha Agrawal, M.B.B.S., Dr. med. Barbara Carl, Thomas Dukatz, Christoph Kappus, Dr. med. Malgorzata Kolodziej and Dr. med. Daniela Kuhnt for performing the manual slice-by-slice segmentations of the medical images and thereby providing the 2D and 3D masks for the evaluation. Furthermore, the authors thank Drs. Fedorov, Tuncali, Fennessy and Tempany for sharing the prostate data collection. Finally, the authors would like to …
References
- et al., Deformable models in medical image analysis: a survey, Medical Image Analysis (1996)
- et al., A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image, Computerized Medical Imaging and Graphics (2011)
- et al., 3-T MR-guided brachytherapy for gynecologic malignancies, Magnetic Resonance Imaging (2012)
- et al., Registration and segmentation for image guided therapy
- et al., Snakes – active contour models, International Journal of Computer Vision (1987)
- et al., Active appearance models
- et al., Normalized cuts and image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence (2000)
- et al., Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging, Computerized Medical Imaging and Graphics (2009)
- et al., Interactive graph cuts for optimal boundary and region segmentation of objects in N–D images, IEEE International Conference on Computer Vision (ICCV) (2001)
- et al., FIST: Fast interactive segmentation of tumors, Abdominal Imaging (2011)
- GrowCut – interactive multi-label N-D image segmentation
- Sketch-based editing tools for tumour segmentation in 3D medical images, Computer Graphics Forum
- Real-time 3D interactive segmentation of echocardiographic data through user-based deformation of B-spline explicit active surfaces, Computerized Medical Imaging and Graphics
- Template-cut: a pattern-based segmentation paradigm, Scientific Reports, Nature Publishing Group (NPG)
- Optimal surface segmentation in volumetric images – a graph-theoretic approach, IEEE Transactions on Pattern Analysis and Machine Intelligence
- An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision, IEEE Transactions on Pattern Analysis and Machine Intelligence
- A fast and robust graph-based approach for boundary estimation of fiber bundles relying on fractional anisotropy maps
- Graph-based tracking method for aortic thrombus segmentation
- Aorta segmentation for stent simulation
- Nugget-cut: a segmentation scheme for spherically- and elliptically-shaped 3D objects
- PCG-cut: graph driven segmentation of the prostate central gland, PLoS ONE
- Square-cut: a segmentation algorithm on the basis of a rectangle shape, PLoS ONE
Jan Egger is currently a Senior Researcher at the Institute for Computer Graphics and Vision of the Graz University of Technology in Austria. He received his German pre-diploma and diploma degree in Computer Science from the University of Wiesbaden, Germany in 2001 and 2004, respectively, his Master's degree in Computer Science from the University of Applied Sciences, Darmstadt, Germany in 2006, his first Ph.D. in Computer Science from the University of Marburg, Germany, in 2009, and his second interdisciplinary Ph.D. in Human Biology from the University Hospital of Marburg, Germany, in 2012. His research interests are Medical Image Analysis and Computer Vision, and Image-Guided Therapy, and he is currently working towards his German Habilitation in Computer Science.
Tobias Lüddemann received his diploma in Mechanical Engineering with majors in Medical Engineering and Information Technology from the Technical University of Munich, Germany, in 2013. His current research interests include biomedical image processing and medical device networks.
Robert Schwarzenberg received his Bachelor's degree in computer science from the University of Marburg, Germany, in 2012. Currently he is studying towards a dual degree in chemistry and English linguistics and literature.
Bernd Freisleben is a full professor of computer science in the Department of Mathematics and Computer Science at the University of Marburg, Germany. He received his Master's degree in computer science from the Pennsylvania State University, USA, in 1981, and his Ph.D. degree in computer science from the Darmstadt University of Technology, Germany, in 1985. His research interests include computational intelligence, scientific computing, multimedia computing, and medical image processing.
Christopher Nimsky is a professor and chairman of the Department of Neurosurgery at the University of Marburg, Germany. After medical school at the University Heidelberg he received his neurosurgical training at the Department of Neurosurgery at the University Erlangen-Nuremberg and became staff member in 1999. He became associate professor in 2001 after finishing his Ph.D. thesis on “intraoperative magnetic resonance imaging” and was vice chairman from 2005 to 2008 at the Department of Neurosurgery in Erlangen. His research focus is on medical technologies in neurosurgery, like intraoperative imaging and multimodal navigation, as well as molecular biology in neurooncology.