A visual targeting system for the microinjection of unstained adherent cells

https://doi.org/10.1016/j.compbiomed.2012.11.015

Abstract

Automatic localization and targeting are critical steps in automating the process of microinjecting adherent cells. This process is currently performed manually by highly trained operators and is characterized as a laborious task with low success rate. Therefore, automation is desired to increase the efficiency and consistency of the operations. This research offers a contribution to this procedure through the development of a vision system for a robotic microinjection setup. Its goals are to automatically locate adherent cells in a culture dish and target them for a microinjection. Here the major concern was the achievement of an error-free targeting system to guarantee high consistency in microinjection experiments. To accomplish this, a novel visual targeting algorithm integrating different image processing techniques was proposed. This framework employed defocusing microscopy to highlight cell features and improve cell segmentation and targeting reliability. Three main image processing techniques, operating at three different focus levels in a bright field (BF) microscope, were used: an anisotropic contour completion (ACC) method, a local intensity variation background-foreground classifier, and a grayscale threshold-based segmentation. The proposed framework combined information gathered by each of these methods using a validation map and this was shown to provide reliable cell targeting results. Experiments conducted with sets of real images from two different cell lines (CHO-K1 and HEK), which contained a total of more than 650 cells, yielded flawless targeting results along with a cell detection ratio greater than 50%.

Introduction

Progress in biotechnology has been increasing the demand for genetically modified organisms [1], with manual microinjection forming one of the most widely used techniques to deliver foreign DNA or other proteins into cells. These operations are typically characterized by high skill requirements, long training periods (1–2 years) and comparatively low success rates (40%–70%). They are also considered tedious and time consuming [2], [3], [4]. Moreover, humans lack consistency over repeated operations, so the automation of cell microinjections is a growing field of interest with a wide range of applications [1], [2], [3], [4]. In this context automatic cell segmentation is essential to enable the realization of automated systems that can increase the efficiency and consistency of the operations while reducing costs.

Historically, different techniques have been used to ease the visual identification of cells under the microscope, most of them relying on chemical dyes [5], [6], [7]. Despite the results achieved with staining techniques, the side effects of chemical dyes on the cells' life expectancy are still under debate. Several works report on the toxicity of the dyes, describing side effects observed, for example, in the endoplasmic reticulum [8] and cytoskeleton [9]. Because of this potential harm, highly diluted dye solutions have long been used, impairing the quality of the staining process [10]. Moreover, staining may affect cell membrane permeability, and dye uptake may differ between cell types in the case of multiline cell cultures [11].

In contrast, the new vision algorithm presented here is designed to detect unstained cells using bright field (BF) microscopy. This dye-free technique avoids the controversial side effects mentioned above and is based on a widely used imaging mode for cell observation and microinjection.

The research presented here mainly focuses on Chinese hamster ovary (CHO-K1) cells imaged at 200× optical magnification and 1024×768 pixels resolution. These cells are widely used in biological and medical research, as well as commercially in the production of therapeutic proteins [12]. They are mostly transparent, adherent to the Petri dish, and present a range of different shapes and sizes with dimensions varying from 10 to 50 μm (see Fig. 1). In addition, human embryonic kidney (HEK) cells were also used in this research as part of the experimental validation of the reported visual targeting framework.

Literature in the field of unstained cell targeting shows that pioneering image processing algorithms for cell segmentation were often based on gray level thresholding [13]; however, these methods fail to single out individual cells in clusters. Nevertheless, most state-of-the-art methods are still threshold-based, especially when the aim is a coarse preliminary selection of the targets [14], [15]. Recently, Tscherepanow et al. [15] applied constrained active contours (snakes) to the surface detection of plated ovarian cells, achieving a recognition rate of up to 90%. However, snakes are prone to the effects of local minima during the energy function minimization process, which may trap them in incorrect models [16].
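To illustrate why pure gray-level thresholding fails on clusters, the following minimal sketch (the synthetic image and the naive flood-fill labeling are ours, purely for illustration, not from the paper) shows two touching dark cells collapsing into a single connected component:

```python
import numpy as np

def threshold_segment(img, thresh):
    """Classic gray-level thresholding: pixels darker than `thresh`
    are foreground (cells appear dark in bright field), then
    4-connected components are labeled with a simple flood fill."""
    fg = img < thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if fg[y, x] and labels[y, x] == 0:
                current += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < img.shape[0] and 0 <= cx < img.shape[1]
                            and fg[cy, cx] and labels[cy, cx] == 0):
                        labels[cy, cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, current

# Two dark blobs that touch by one column on a bright background:
# thresholding merges them into a single component.
img = np.full((10, 14), 200.0)
img[3:7, 2:6] = 50.0   # cell 1
img[3:7, 5:9] = 50.0   # cell 2, touching cell 1
labels, n = threshold_segment(img, 100)
print(n)  # -> 1: the two cells are counted as one
```

Both cells receive the same label, so any per-component target definition would place a single target on the pair, which is exactly the cluster problem noted above.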

A completely different approach focuses on the cell membrane in both 2D and multilayered 3D representation. Some cells show very distinctive outer contours that can be easily detected by edge detection and ridge enhancement algorithms. Examples that exploit this characteristic include the work of Adiga et al. [18], in which partial differential equation (PDE) based edge-enhancing diffusion [19], [20] was used to achieve better smoothing along edges than across regions, resulting in up to 95% correct nuclei segmentation with false positive rates of around 10%. In this case the diffusion process assisted an anisotropic contour completion (ACC) process [17] that has demonstrated good results when applied to CHO-K1 cells [21], [22].
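For context, edge-preserving diffusion of the kind cited above can be sketched with a minimal Perona-Malik scheme. This is a simpler relative of the edge-enhancing diffusion used in [18], [19], [20], and the parameter values and periodic border handling below are our simplifying assumptions:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik-style diffusion: smooth inside regions while the
    conduction coefficient g() shuts diffusion off across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # ~1 in flat areas, ~0 at edges
    for _ in range(n_iter):
        # finite differences toward the four neighbors
        # (np.roll gives a periodic border; acceptable for this sketch)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion removes the noise but keeps the 0 -> 100 jump,
# i.e. smoothing along regions rather than across the edge.
rng = np.random.default_rng(0)
noisy = np.zeros((16, 16))
noisy[:, 8:] = 100.0
noisy += rng.normal(0.0, 5.0, noisy.shape)
smooth = anisotropic_diffusion(noisy)
```

Because small gradients (noise) give g close to 1 while the large edge gradient gives g close to 0, the flat regions are smoothed while the contour survives, which is the property the contour completion step relies on.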

Another useful approach to cell detection is based on supervised learning schemes. Given a training set of 496 cells, Long et al. [23] used a support vector machine to count viable cells in BF microscope images and compared this count with the results obtained from a neural network [24], [25]. Both methods resulted in a very high success rate (up to 97%). This is one of the best performances yet reported, but the method is limited to cells that are visible and show features comparable to the template derived from the training set.

Ali et al. [16] used defocused BF microscope images to obtain a phase-based segmentation of unstained cells. They started with a highly defocused image in which the cell was visible as a strong dark smear and proceeded by searching its outer contours as focus was improved. As the presence of visual high frequency elements increased with the improving focus, they used local phase coherence to refine the cell contour, thus refining the localization of its boundaries. The result was a cell detection rate consistently above 87% with false positive rates down to 5%. Unfortunately, despite these good results, this method was only developed to segment isolated cells and is not directly applicable to the cells of interest here since CHO and HEK cells grow in tight clusters.

To achieve reliable CHO and HEK cell targeting, this work proposes the use of defocusing as a means to improve cell contrast and cell localization performance. Different focus levels highlight different cell features, hence the developed framework implements distinct image processing algorithms to best exploit the information provided at each focus level (Fig. 2). Increased consistency and reliability are achieved by combining the outputs of these algorithms through validation maps. Supporting this approach, defocused BF microscopy has recently been considered in several works on automatic cell spotting because it is simple, affordable and suitable for an objective comparison between different frameworks. Most of these works rely on several focus levels, as in Selinummi et al., where the authors collected a z-stack of each frame to generate a 2D projection image [26], or in Ali et al., where information from in-focus and out-of-focus images is combined to spot adherent cells' nuclei [27]. Watershed-based cell spotting techniques have also been tried on defocused images with some success [28]. In addition, when the microscope focus cannot be directly set, other methods employ progressive coarsening of the in-focus images to simulate a blurring effect [29].
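As an illustration of the fusion idea, the foreground masks produced by the three detectors can be combined into a validation map. The majority-vote rule in this sketch is our assumption for illustration, not necessarily the exact fusion rule used in the paper:

```python
import numpy as np

def validation_map(mask_acc, mask_var, mask_thr, min_votes=2):
    """Fuse boolean foreground masks from three detectors (e.g. ACC,
    local intensity variation, grayscale threshold): a pixel is
    validated only where at least `min_votes` methods agree."""
    votes = mask_acc.astype(int) + mask_var.astype(int) + mask_thr.astype(int)
    return votes >= min_votes

# Toy 1x4 example: only pixels confirmed by >= 2 methods survive,
# suppressing the isolated false positive of a single detector.
a = np.array([[True,  True,  False, False]])
b = np.array([[True,  False, False, True ]])
c = np.array([[True,  True,  False, False]])
vm = validation_map(a, b, c)
print(vm.tolist())  # -> [[True, True, False, False]]
```

The design intent matches the stated priority: trading some detection rate (pixels seen by only one method are dropped) for the elimination of off-cell false positives.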

In contrast to most of the literature, the main goal of our framework was the achievement of an error-free targeting system, which is important to guarantee high consistency in automatic microinjection procedures. This means defining only one target per cell and no off-cell targets. Maximizing the cell identification rate was a secondary goal, defined to target a large percentage of the cells in the field of view while minimizing the waste of injection material.

In the following sections this new processing framework is thoroughly described, starting with a review of defocusing theory and of the main image processing algorithms used at each focus level. The fusion of information for enhanced cell targeting results is subsequently presented, followed by evaluation and validation experiments conducted with the two adherent cell lines mentioned above. Lastly, conclusions and future applications are presented.

Section snippets

Methods

The developed visual targeting framework combines three different image processing techniques in a unique manner to reliably detect viable cells for injection by means of defocusing microscopy. Unlike most previous works that employ multilevel defocusing microscopy [26], [27], we propose grabbing only three images of each frame of the cell culture. This set of images, taken at three different focus levels, is representative of the three image stages that are

Novel framework

Previous studies [21], [22] and further experiments performed during the course of this research (presented in Section 4) show that ACC performs well on our cells of interest, achieving hit ratios over 70% and error rates lower than 10%. However, it still falls short of the goal set for the system, i.e. error-free cell targeting with a single target per cell and zero off-cell targets.

In order to achieve this system goal over CHO and HEK cells, a new

Evaluation methods

Evaluation of the new framework for automatic cell targeting was performed through a series of experiments aimed at assessing the quality of each image processing method described above. To accomplish this, ground-truth solutions were created through manual segmentation of microscope images to define suitable regions for microinjection within each image, as shown in Fig. 8.

The evaluation set created consisted of nine 1024×768 pixels images covering a total of 502 CHO-K1 cells, plus six HEK

Algorithm evaluation and tuning

This section describes experiments performed to determine the optimal parameters for each image processing algorithm introduced in Section 2. Here the optimal parameters were considered to be those that yielded points closest to TPR=100% and FPR=0%, both in cell pixel detection and target definition. This was done to obtain a processing framework most likely to accomplish the goal of zero errors with a high cell detection rate. Nonetheless, during this process the impact of each of these
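The TPR/FPR scoring and the distance to the ideal operating point (TPR=100%, FPR=0%) can be sketched as follows; the boolean-array interface for predictions and ground truth is our assumption:

```python
import numpy as np

def tpr_fpr(predicted, ground_truth):
    """Pixel-wise rates against a manually segmented ground truth;
    both inputs are same-shaped boolean arrays."""
    tp = np.sum(predicted & ground_truth)    # cell pixels correctly found
    fp = np.sum(predicted & ~ground_truth)   # background wrongly targeted
    tpr = tp / np.sum(ground_truth)
    fpr = fp / np.sum(~ground_truth)
    return tpr, fpr

def distance_to_ideal(tpr, fpr):
    """Smaller is better: Euclidean distance to the point (TPR=1, FPR=0)."""
    return float(np.hypot(1.0 - tpr, fpr))

# Toy example: half the cell pixels found, no background hit.
gt   = np.array([[True, True,  False, False]])
pred = np.array([[True, False, False, False]])
tpr, fpr = tpr_fpr(pred, gt)
print(tpr, fpr)  # -> 0.5 0.0
```

Selecting the parameter set whose (FPR, TPR) point minimizes this distance is one concrete way to implement the "closest to TPR=100% and FPR=0%" criterion described above.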

Results

A summary of the targeting test results obtained with the CHO-K1 cells is presented in Table 1. These results were obtained by averaging the TPR and FPR over the 3 test sets that made up the 3-fold cross validation process. Data in the table demonstrates that the validation map was able to eliminate FP errors, bringing the number of off-cell targets to zero and minimizing the number of multiple targeted cells. Furthermore, the side effect of this operation was minimal as there was only a small

Conclusions and future work

This paper describes a novel image processing framework designed for the full automation of a biomanipulation system. The goal was the creation of a reliable visual recognition algorithm capable of identifying and defining microinjection targets on unstained adherent cells in a Petri dish. The challenge faced was the localization of these essentially transparent cells, and this was solved through a fusion of three image processing algorithms and defocusing methods. An anisotropic contour

Conflict of interest statement

We certify that there is no conflict of interest with any financial/academic organization regarding the material discussed in the manuscript.

Gabriele Becattini received his B.S. degree in Biomedical Engineering and his M.S. degree in neuro-engineering from Università di Genova in 2006 and 2008 respectively. He is currently a Ph.D. candidate in the Advanced Robotics Department of the Italian Institute of Technology. His research interests are in the area of computer vision, microscopy, control and automation.

References (39)

  • D. Gil et al., Extending anisotropic operators to recover smooth shapes, Comput. Vis. Image Understand. (2005)
  • X. Long et al., Automatic detection of unstained viable cells in bright field images using a support vector machine with an improved training procedure, Comput. Biol. Med. (2006)
  • T. Yeo et al., Autofocusing for tissue microscopy, Image Vis. Comput. (1993)
  • A. Carpenter et al., Systematic genome-wide screens of gene function, Nat. Rev. Genet. (2004)
  • Y. Zhang, K.K. Tan, S. Huang, Software Based Vision System for Automated Cell Injection, in: Proceedings of the...
  • F. Arai, K. Morishima, T. Kasugai, T. Fukuda, Bio-micro-manipulation (new direction for operation improvement), in:...
  • L. Mattos et al., Blastocyst microinjection automation, IEEE Trans. Inf. Technol. Biomed. (2009)
  • E. Hodneland et al., A unified framework for automated 3-d segmentation of surface-stained living cells and a comprehensive segmentation evaluation, IEEE Trans. Med. Imaging (2009)
  • U. Agero et al., Cell surface fluctuations studied with defocusing microscopy, Phys. Rev. E (2003)
  • C.D. Solorzano et al., Segmentation of nuclei and cells using membrane related protein markers, J. Microsc. (2001)
  • M. Terasaki, Fluorescent labeling of endoplasmic reticulum, Methods Cell Biol. (1989)
  • V.V. Lulevich et al., Cell tracing dyes significantly change single cell mechanics, J. Phys. Chem. B (2009)
  • B.S. Mookerjee, M. Choudhury, B. Ganguly, Toxicating effect of Janus green, Naturwissenschaften, 52(1), ...
  • I. Schmid, C. Uittenbogaart, B.D. Jamieson, Live-cell assay for detection of apoptosis by dual-laser flow cytometry...
  • K.P. Jayapal et al., Recombinant protein therapeutics from CHO cells — 20 years and counting, Chem. Eng. Prog. (2007)
  • K. Wu et al., Live cell image segmentation, IEEE Trans. Biomed. Eng. (1995)
  • M. Tscherepanow et al., Automatic segmentation of unstained living cells in bright-field microscope images, Lect. Notes Comput. Sci. (2008)
  • M. Tscherepanow, F. Zollner, F. Kummert, Classification of segmented regions in brightfield microscope images, in:...
  • R. Ali, M. Gooding, M. Christlieb, M. Brady, Phase-based segmentation of cells from brightfield microscopy, in:...

    Leonardo S. Mattos (B.Sc. 1998, M.Sc. 2003, Ph.D. 2007) is a Team Leader at the Italian Institute of Technology (IIT) in Genoa. His research background includes micromanipulation, systems integration, development of user interfaces and systems for safe and efficient teleoperation, robotic surgery, computer vision, adaptive controllers and automation. Leonardo received his Ph.D. degree in electrical engineering from North Carolina State University (NCSU, USA), where he worked as a research assistant at the Center for Robotics and Intelligent Machines (CRIM) from 2002 until 2007. Leonardo has been a researcher at the IIT's Advanced Robotics Department since 2007. He is the PI and coordinator of the EC funded project μRALP.

    Darwin G. Caldwell (B.Sc. 1986, M.Sc. 1990, Ph.D. 1996) is Director of the Advanced Robotics Department at the Italian Institute of Technology and a Visiting/Honorary Professor at the Universities of Sheffield, Manchester, Wales (Bangor) and King's College London. His research background includes innovative actuators and sensors, haptic feedback, force augmentation exoskeletons, dexterous manipulators, humanoid robotics, bipedal and quadrupedal robots (iCub), biomimetic systems, rehabilitation robotics, micro-robotics, telepresence and teleoperation procedures, medical robotics, and automation systems for the food industry. He is involved in several European projects including VIACTORS, OCTOPUS, AMARSI, μRALP, STIFF-FLOP, SAFARI and AUTORECON.
