Embryo development stage prediction algorithm for automated time lapse incubators

https://doi.org/10.1016/j.cmpb.2019.05.027

Abstract

Background and Objective

Time-lapse microscopy has become an important tool for studying the embryo development process. Embryologists can monitor the entire embryo growth process and thus select the best embryos for fertilization. This time- and resource-consuming process is among the key factors for successful pregnancies. Tools for automated evaluation of embryo quality and prediction of the development stage are being developed to improve embryo selection.

Methods

We present a two-classifier, vote-based method for embryo image classification. Our classification algorithms have been trained on features extracted with a Convolutional Neural Network (CNN). The embryo development stage is then predicted by comparing the confidence of the two classifiers: each image is labelled by the classifier that reports the larger confidence value.

Results

The evaluation has been performed on imagery of real embryos, taken in the ESCO Time Lapse incubator from four different developing embryos. The results identify the most effective combination of two classifiers, which increases prediction accuracy to an overall 97.62% on the test set.

Conclusions

We have presented an approach for automated prediction of the embryo development stage from time-lapse incubator microscopy images. Our algorithm extracts high-complexity image features using a CNN. Classification is done by comparing the predictions of two classifiers and selecting the label of the classifier with the higher confidence value. This combination of two classifiers has allowed us to increase the overall accuracy of the CNN from 96.58% by 1.04 percentage points to 97.62%. The best results are achieved when combining the CNN and Discriminant classifiers. Practical implications include improvement of the embryo selection process for in vitro fertilization.

Introduction

Medical doctors use different assisted fertilization methods, such as insemination or in vitro fertilization (IVF), to achieve successful pregnancies. IVF is used to treat infertility when medication or insemination does not help. In IVF, multiple embryos are fertilized and grown in vitro at the same time. The success of this method relies on the selection of the most viable embryo. Embryologists select embryos by visual inspection, which takes time and is prone to error. The pregnancy rate in one cycle of IVF and embryo transfer can be as high as 60%, but some couples fail repeatedly and need multiple IVF cycles [1]. Liu et al. present a time-lapse deselection model involving qualitative deselection parameters, which include poor conventional day-3 morphology and abnormal cleavage patterns identified via time-lapse monitoring [2]. Time-lapse microscopy (TLM) has provided new tools for embryo image inspection. These machines continuously monitor embryos in different layers and capture images of embryo evolution, aiming to measure the duration of each development stage and inspect embryo cell shape without removing embryos from their growth environment. Embryo development from fertilization to the 4-cell stage can last up to 48 h, and to the 8-cell stage up to 72 h. The time spent in the 2-cell and 4-cell stages often correlates with embryo quality, and this data can therefore be used as one of the decision-making factors when selecting the best embryo for transfer. Accurate measurement of timing parameters for multiple embryos requires automated tools to track the duration of each embryo development stage. Such a tool should give precise predictions and cope with a diverse set of challenges such as deforming cell shapes, poor visual features, and similarities between embryos at different stages, which increase as the number of cells in an embryo grows.
Moreover, because the images are two-dimensional, part of the information between layers is lost. Additionally, embryo growth can be affected by anomalies such as fragmentation, reverse cleavage or abnormal divisions, which may occur between development stages. Samples of images from our data set are displayed in Figs. 1 (“normal” conditions) and 2 (“fragmentation” conditions, illustrating uneven division of the embryo's cells). The higher the degree of fragmentation, the lower the likelihood of pregnancy, so it is very important to detect and distinguish such conditions for successful IVF procedures.

The quantitative and systematic analysis of embryonic cell dynamics from in vivo 3D time-lapse image data sets remains one of the major challenges in the development of automated biology analysis solutions, requiring high fidelity in space and time [3]. Cell sequencing can be used to explore biological systems with unprecedented resolution; e.g. sequencing the genome of individual cells can reveal somatic mutations and allows the investigation of clonal dynamics [4]. Tools such as the BioEmergences workflow [5] enable determination of the nucleus center, linkage and mitosis detection, etc. to reconstruct lineage trees, which, unfortunately, is often not enough for automated evaluation of embryo quality. Image processing solutions such as ImageJ, MIPAV and VisSeg [6] can be used to segment cells or cell nuclei from 2D images or 3D image stacks; the resulting data can then be used for measurement and classification of the segmented objects, allowing effective quantitative analysis [7]. Often this is realized by analyzing the input image based on gradients and normal directions in the proximity of detected seed points, which are then processed by a selected global thresholding method [8] or by gradient flow tracking and grouping [9], which, however, does not always yield satisfactory accuracy. The signal-to-noise ratio can be further improved by edge-preserving filtering, after which the cell shape can be reconstructed by the Subjective Surfaces algorithm [10]. Mosaliganti et al. [11] suggest segmenting cells from 3D membrane images, even in dense tissues, by detecting local membrane planes, filling structural gaps and performing region segmentation, then determining which segments touch which cells to quantify changes in morphogenetic parameters such as cell shape and size and the arrangement of epithelial and mesenchymal cells. Stegmaier et al. [12] developed the Real-time Accurate Cell-shape Extractor (RACE), an image analysis method for automated three-dimensional cell segmentation in large-scale images; it extracts cell-shape information from entire embryos imaged with confocal and light-sheet microscopy and is often faster and more accurate than other methods. Lou et al. [13] showcase the application of modular interactive nuclear segmentation for counting cells and for fluorescence intensity measurements of 2D and 3D image data; while effective, the method is very time-consuming when computing the model data. Alternatively, a linear-chain Markov model [14] can be used to estimate the number and location of cells at each time step, allowing reliable detection and localization of cells up to the four-cell stage. A benchmark report [15] evaluates different cell tracking methods based on tracking by detection and tracking by model evolution and determines that the most effective classical technique is Gaussian band-pass filtering with seeded k-means clustering, merging segments without tracks into adjacent segments with tracks.

The embryo cell classification task itself is not new, but the need for a high accuracy/performance ratio remains, especially when targeting implementations capable of running standalone in the incubators themselves. Wang et al. [16] present a multi-level method for embryo stage classification that identifies the number of cells during early human embryo development. The proposed method employs a rich set of hand-crafted and automatically learned embryo features for classification and avoids explicit segmentation or tracking of individual embryo cells. Khan, in his works, presents different methods to automatically predict the number of cells at each time point. Using a conditional random field [17], he compactly encodes various aspects of the evolving embryo and estimates the number of cells at each time step via exact inference. In another study, the author uses a linear-chain Markov model [18] that estimates the number and location of cells at each time step. The state space for each time step is derived from a randomized ellipse-fitting algorithm that attempts to find individual cell candidates within the embryo. These cell candidates are combined into embryo hypotheses, and the algorithm finds the most likely sequence of hypotheses over all time steps. In his later studies, Khan presents a deep Convolutional Neural Network (CNN) model trained to count cells from raw microscopy images; the CNN he applies [19] estimates the number of cells in a developing embryo up to the 5-cell stage.

In this paper, we aim to automatically predict the development stage (1-cell, 2-cell, 4-cell, 8-cell) with a minimal number of false predictions, directly in the incubator machine itself, providing a fully automated monitoring tool. We base this algorithm on a CNN as both classifier and feature extractor. Multiple classifiers have been evaluated to achieve the best accuracy and the lowest log-loss value. They are combined in our hybrid system to predict the image class by voting. A combination of two classifiers has demonstrated better results than a single classifier for embryo development stage prediction (as used in other approaches).

Compared to the approaches discussed above, ours differs in a more effective implementation of the feature extraction and prediction stages. We extract the features used for classification from a CNN and use them for classifier training and prediction. The main novelty is a hybrid approach that combines multiple classifiers to achieve the practical goal of the highest possible prediction accuracy for embryos within a dish in an incubator. The algorithm compares the confidence values of the proposed labels and assigns the test sample the label with the higher confidence value.
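As an illustration, the confidence-based vote described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the function name, the NumPy representation and the example confidence vectors are our own assumptions.

```python
import numpy as np

def vote_predict(conf_a: np.ndarray, conf_b: np.ndarray) -> int:
    """Return the label from whichever classifier is more confident.

    conf_a, conf_b: per-class confidence vectors (e.g. softmax outputs)
    produced by the two classifiers for one embryo image.
    """
    label_a = int(np.argmax(conf_a))
    label_b = int(np.argmax(conf_b))
    # Assign the label whose classifier reports the higher peak confidence.
    return label_a if conf_a[label_a] >= conf_b[label_b] else label_b

# Example: classifier B is more confident (0.8 > 0.4), so its label wins.
print(vote_predict(np.array([0.4, 0.35, 0.25]), np.array([0.1, 0.1, 0.8])))
```

The tie-breaking rule (preferring the first classifier on equal confidence) is an arbitrary choice for this sketch; the paper only states that the label with the larger confidence rating is selected.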

Method

We propose the EMbryo image Classification Algorithm (EMCA) for automated time-lapse incubators. The EMCA algorithm consists of two parts: a feature extraction part and a classification part. To eliminate the error that often derives from the position at which a Petri dish was placed into the Time Lapse incubator, and to improve the generalization of the available data set, data augmentation has been performed: each image has been duplicated and rotated in 45° steps. After the augmentation process, our 3000
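The rotation-based augmentation step can be sketched as below. This is a minimal sketch assuming the Pillow imaging library; the function name and the dummy grayscale frame are illustrative and not the paper's code.

```python
from PIL import Image

def augment_rotations(img: Image.Image, step_deg: int = 45) -> list:
    """Multiply one embryo frame into rotated copies (0°, 45°, ..., 315°).

    Rotating about the image centre makes the trained classifier less
    sensitive to the orientation in which the Petri dish was placed
    into the incubator.
    """
    return [img.rotate(angle) for angle in range(0, 360, step_deg)]

# A dummy grayscale frame stands in for a real 874 x 874 embryo image.
frames = augment_rotations(Image.new("L", (128, 128)))
print(len(frames))  # eight orientations per source image
```

Rotations at multiples of 45° interpolate pixel values at the non-axis-aligned angles, so in practice an appropriate resampling filter and fill colour would need to be chosen.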

Results

The algorithm has been evaluated using imagery of ten embryos. The embryos were grown in Petri dishes in an ESCO Miri TL Time Lapse Incubator [27]. Their growth process was monitored from fertilization until the third day of development. Images were captured at 5-minute intervals in six focal planes at 874 × 874 resolution. All images were labelled into five classes: 1, 2, 4 or 8 cells, or no embryo in the Petri dish. The data set of ten embryos was divided into training and testing parts. One hundred
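When a data set of whole embryos is divided into training and testing parts, frames from one embryo should fall on only one side of the split so that near-duplicate frames cannot leak between the two sets. A minimal sketch of such an embryo-level split (the 20% test fraction and the fixed seed are our own assumptions, not values from the paper):

```python
import random

def split_by_embryo(embryo_ids, test_fraction=0.2, seed=42):
    """Split whole embryos (not individual frames) into train/test sets,
    so that frames from one embryo never appear on both sides."""
    rng = random.Random(seed)
    ids = list(embryo_ids)
    rng.shuffle(ids)
    n_test = max(1, round(len(ids) * test_fraction))
    return ids[n_test:], ids[:n_test]

# Ten embryos, as in the evaluation described above.
train_ids, test_ids = split_by_embryo(range(10))
print(len(train_ids), len(test_ids))  # 8 2
```

All frames (across the six focal planes and every time point) belonging to a given embryo would then follow that embryo's assignment.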

Discussion

The proposed EMCA algorithm, based on a voting combination of a convolutional neural network (CNN) and a discriminant-based classifier, achieves almost 100% prediction accuracy for one- and two-cell embryo classification, which is comparable to the results of other researchers [16], [18]. For the prediction of the four-cell class, our method outperforms Wang [16], Khan CRF5 [18] and plain CNN algorithms (see Table 2). The EMCA method is able to achieve 93.24% and 93.58%

Declaration of Competing Interest

The authors declare no conflicts of interest.

Acknowledgments

This research is supported by, and builds upon, the Incubator MIRI-TL hardware and cell imagery data provided by JSC ESCO Medical. The research was made with the support of the InnoITeam Center of Excellence.

Ethics

The permit for ethical studies using human-subject-related materials was issued by the Ethics Committee of the Faculty of Informatics at Kaunas University of Technology. The permit number is IFEP201706-3.

References (28)

  • G. Li et al., 3D cell nuclei segmentation based on gradient flow tracking, BMC Cell Biol. (2007)
  • C. Zanella, Cells segmentation from 3-D confocal images of early zebrafish embryogenesis, IEEE Trans. Image Process. (2010)
  • K.R. Mosaliganti et al., ACME: automated cell morphology extractor for comprehensive reconstruction of cell membranes, PLOS Comput. Biol. (2012)
  • A. Khan et al., A linear chain Markov model for detection and localization of cells in early stage embryo development