Saliency and ballness driven deep learning framework for cell segmentation in bright field microscopic images

https://doi.org/10.1016/j.engappai.2022.105704

Abstract

Cell segmentation is the most significant task in microscopic image analysis, as it facilitates differential cell counting and the analysis of sub-cellular structures for diagnosing cytopathological diseases. Bright-field microscopy is considered the gold standard among the optical microscopes used for cell analysis due to its simplicity and cost-effectiveness. However, automatic cell segmentation in bright-field microscopy is challenging due to imaging artifacts, poor contrast, overlapping cells, and the wide variability of cells. In addition, the limited availability of labeled bright-field images further constrains research into supervised models for automated cell segmentation. In this research, we propose a novel cell segmentation framework termed Saliency and Ballness driven U-shaped Network (SBU-net) to overcome these challenges. The proposed architecture comprises a novel data-driven feature fusion module that enhances the perceivable structure of cells using their saliency and ballness features. This, together with an encoder–decoder model having dilated convolutions and a novel combination loss function, captures the global context of cell structures and produces accurate cell segmentation results. SBU-net is evaluated on two publicly available bright-field datasets of T cells and pancreatic cancer cells. Under 5-fold cross-validation, the model outperformed state-of-the-art models, producing mean Intersection over Union (IoU) scores of 0.804 and 0.829 and mean Dice scores of 0.891 and 0.906, respectively. The architecture was also tested on a fluorescence dataset to assess its generalization capability, achieving a mean IoU of 0.892 and a mean Dice of 0.948, outperforming other models reported in the literature.

Introduction

Cell image analysis is integral to diagnosing, treating, and monitoring patients with various health conditions, and microscopy remains the gold standard for cell analysis. Microscopic images of blood smears, tissue samples, and body fluid samples are analyzed to obtain quantitative information on cell morphology. The cell signature is determined by the shape and size of the cells, the morphology of the nucleus, the presence of granules, and the amount of cytoplasm. These factors may vary depending on the disease, and microscopic image analysis can be used to assess the variations. Accurate quantification of the cell signature depends on detecting the spatial locations of the cells and cellular structures in the image (Dimopoulos et al., 2014). It plays a vital role in the detection, treatment, and monitoring of anemias, malaria parasites (Gopakumar et al., 2018), tuberculosis (Simon et al., 2019), eosinophilia, and different types of cancers, including leukemia (Kalmady et al., 2017, Thanmayi et al., 2021).

Traditionally, medical experts manually extract relevant information from microscopy images, which is tedious and time-consuming. Furthermore, the results depend heavily on the technical expertise of the examiner and on variability across capturing instruments. Bright-field microscopy is the simplest and cheapest optical microscopy technique and is used in resource-limited clinics where cost and human expertise are of primary concern (Mualla et al., 2018). Despite its advantages, bright-field microscopy typically exhibits low contrast due to the variation in light absorption by the various biological samples. As a result, the analysis of bright-field images remains a challenge. Semi-automated or fully automated systems are required to extract specific information from microscopic images in a short amount of time. Although fully automated systems cannot be considered a replacement for the intelligence of medical professionals, they can facilitate faster and more accurate decisions on case variants (Shaukat et al., 2020). Therefore, one of the focus areas in the analysis of microscopy images is the automated detection and segmentation of cells and cellular structures (Song et al., 2018). This helps to quantify the severity of many diseases, including malaria and anemia, and supports differential cell counting for assessing an individual’s health.

Over the past few years, advances in deep learning techniques have outperformed classical image analysis techniques in cell and nuclei segmentation (Xing et al., 2018). The task remains difficult due to challenges such as the heterogeneous shapes of cells in the image, intracellular variability, and the occurrence of cells as clusters. Imaging artifacts, overlapping and touching nuclei and cells, and the appearance of cells and nuclei as dense regions are still open problems in automated image segmentation. Besides, the publicly accessible annotated data available for training such models is insufficient (Dimopoulos et al., 2014). Furthermore, the performance of segmentation algorithms is significantly affected by the low-contrast property of bright-field images.

Several studies based on deep learning techniques (Ali et al., 2021, Moen et al., 2019) have shown promising results for bright-field images, but there is still a wide performance gap compared to fluorescence images. Therefore, there remains a great demand for precise, standardized, and robust whole-cell segmentation algorithms for measuring the properties and subcellular structures in cell images, especially in bright-field microscopy images (Al-Kofahi et al., 2018). Owing to the simplicity and cost-effectiveness of bright-field microscopy, developing an accurate cell segmentation framework is a beneficial but challenging task. In this research, we propose a neural network architecture that uses a novel loss function to segment cells accurately, even when cells overlap in the image. Unlike other prevailing segmentation models, the proposed model learns deep features from the original images and their corresponding perceptual feature maps. The perceptual features are derived from the data without any manual effort, and a set of specific features, namely saliency, ballness, and orientation, are selected from them using a voting mechanism. These act as prior information and are used to produce feature-enhanced images for the proposed model. We show that the proposed framework outperforms state-of-the-art models in semantic cell segmentation in Section 4 (Table 9).
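As a rough illustration of how such perceptual features can be derived from an image without domain knowledge, the sketch below computes per-pixel saliency, ballness, and orientation from the eigenvalues of a 2×2 structure tensor. This is a simplified structure-tensor stand-in for the full tensor voting framework (Medioni et al., 2005), not the paper's implementation; the function name, the Sobel/Gaussian pipeline, and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def perceptual_features(image, sigma=2.0):
    """Saliency, ballness, and orientation maps from a 2x2 structure tensor.

    Simplified analogue of tensor voting: this eigen-analysis is an
    illustrative assumption, not SBU-net's actual feature extractor.
    """
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient

    # Gaussian-smoothed outer products of the gradient form the tensor.
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)

    # Closed-form eigenvalues of the symmetric 2x2 tensor, l1 >= l2 >= 0.
    trace = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    l1 = 0.5 * (trace + disc)
    l2 = 0.5 * (trace - disc)

    saliency = l1 - l2                # "stick" strength: oriented structure
    ballness = l2                     # "ball" strength: blob/junction-like
    orientation = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # dominant angle
    return saliency, ballness, ballness * 0 + orientation
```

Stacked with the raw intensity image, maps like these could serve as the feature-enhanced input channels that the text describes as prior information.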

The main contributions of this research are listed below and are discussed in detail in Section 3.

  1. A novel feature fusion approach that combines the power of deep learning-based segmentation networks with perceptual features derived from the images is designed to improve segmentation performance.

  2. The perceptual features, namely saliency, ballness, and orientation, are generated from the images without leveraging any domain knowledge, using the tensor voting framework (Medioni et al., 2005).

  3. The perceptual features are provided as prior information to the proposed U-shaped encoder–decoder model to improve its instance cell separation capability.

  4. A novel combination loss function is formulated from the focal loss and the Jaccard coefficient to handle the class imbalance problem of the dataset.
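The last contribution, pairing focal loss (Lin et al., 2017) with a soft Jaccard (IoU) term, can be sketched as below. The exact weighting and formulation used in SBU-net are not reproduced here; the equal-weight sum and the `alpha` and `gamma` values are illustrative assumptions.

```python
import numpy as np


def focal_jaccard_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Sketch of a focal + soft-Jaccard combination loss for binary masks.

    Illustrative only: the weighting and hyperparameters are assumptions,
    not the formulation reported in the paper.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)

    # Focal loss: down-weights easy pixels so the sparse foreground
    # (cell) pixels dominate the gradient despite class imbalance.
    pt = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    at = np.where(y_true == 1, alpha, 1.0 - alpha)
    focal = np.mean(-at * (1.0 - pt) ** gamma * np.log(pt))

    # Soft Jaccard loss: 1 - IoU computed on probabilities, which directly
    # optimizes the overlap metric used for evaluation.
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    jaccard = 1.0 - inter / (union + eps)

    return focal + jaccard
```

A confident, correct prediction drives both terms toward zero, while a confident, wrong prediction is penalized by both, which is the behavior a combination loss of this kind targets.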

In this paper, the efficacy of SBU-net is experimentally demonstrated using two bright-field microscopy datasets and one fluorescence microscopy dataset. SBU-net showed significant improvement in the segmentation metrics and can motivate further research on the effective use of perceptual features for segmentation.

The remainder of this paper is structured as follows: Section 2 discusses the related works. Section 3 describes the proposed methodology in detail, followed by the experimentation and results in Section 4, discussion in Section 5 and the conclusion in Section 6.

Section snippets

Related works

The automation of microscopic image analysis involves a cell localization and segmentation step, which is the most critical and challenging step in the image analysis pipeline. A brief review of the existing image segmentation methods is conducted, and the methods are categorized into three groups: traditional methods, deep learning-based segmentation methods, and prior-integrated deep learning methods. For this review, we focused on the significant studies that brought architecture-level innovations for image…

Methodology

Accurate segmentation of cellular structures is essential for cell counting and determining cell morphology, where the segmented cells are used for classification and cell tracking. Automated segmentation of cells from low-contrast microscopic images becomes problematic when the cells occur as dense regions with overlapping or touching cells. To enhance the performance of deep neural networks and to address these issues, some methods have used domain knowledge as prior information (Tofighi et…

Experimental results

The proposed model is evaluated to demonstrate its applicability to cell segmentation. The current study uses bright-field microscopic images to test the model on pixel-level segmentation tasks. We have also tested the architecture on a fluorescence dataset to assess its generalization capability on a different microscopy type. The model is compared against state-of-the-art models (Zhou et al., 2019, Schlemper et al., 2019, Chaurasia and Culurciello, 2017, Ronneberger et al., 2015) for…

Discussions

The research gaps identified and addressed in the current research are the heterogeneous shapes of cells, intracellular variability, and the occurrence of cells as clusters. Segmenting cells from low-contrast bright-field images is also difficult using standard image processing operations. There is also a need for deep models that can work with little annotated training data. Hence, segmenting cells from microscopy images, mainly from bright-field microscopy, is an open problem in the literature. The…

Conclusion

The trade-off between model complexity, accuracy, and the amount of training data is essential in developing segmentation models, especially deep learning models. It is important to integrate the strong theory and mathematically sound techniques developed by the computer vision community over recent decades to improve the performance of these deep learning models in constrained, resource-limited settings. It will also add meaning to the reported high performance of the deep learning model,…

Future work

The model’s capability to segment cells from bright-field microscopic images is established in the study. The generalization capability is also assessed using the fluorescence dataset. One research direction extending this work could be adapting the network to address multi-cell segmentation problems. Instance segmentation is particularly useful for cell counting applications and for measuring the morphological properties of cellular structures. The inclusion of perceptual features for instance…

CRediT authorship contribution statement

S.B. Asha: Methodology, Software, Investigation, Writing – original draft. G. Gopakumar: Conceptualization, Writing – review & editing, Supervision. Gorthi R.K. Sai Subrahmanyam: Writing – review & editing, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (68)

  • Alam, T.M., et al., 2022. A machine learning approach for identification of malignant mesothelioma etiological factors in an imbalanced dataset. Comput. J.
  • Ayanzadeh, A., Yağar, H.O., Özuysal, z.Y., Okvur, D.P., Töreyin, D., Önal, S., 2019. Cell segmentation of 2d...
  • Bohlender, S., et al., 2021. A survey on shape-constraint deep learning for medical image segmentation.
  • Boutillon, A., Borotikar, B., Burdin, V., Conze, P.H., 2020. Combining shape priors with conditional adversarial...
  • Chaurasia, A., Culurciello, E., 2017. LinkNet: Exploiting encoder representations for efficient semantic segmentation....
  • Chen, H., Qi, X., Yu, L., Heng, P.A., 2016. DCAN: Deep contour-aware networks for accurate gland segmentation. In: 2016...
  • Dalal, N., Triggs, B., 2005. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society...
  • Dimopoulos, S., et al., 2014. Accurate cell segmentation in microscopy images using membrane patterns. Bioinformatics.
  • Ding, Y., et al. Tensor voting extraction of vessel centerlines from cerebral angiograms.
  • Dong, H., et al. Automatic brain tumor detection and segmentation using U-net based fully convolutional networks.
  • Follain, G., et al., 2020. Combining StarDist and TrackMate example 3 - flow chamber dataset.
  • Franken, E., et al. An efficient method for tensor voting using steerable filters.
  • Gopakumar, G., et al., 2016. Framework for morphometric classification of cells in imaging flow cytometry. J. Microsc.
  • Gopakumar, G., et al. Deep learning applications to cytopathology: A study on the detection of malaria and on the classification of leukaemia cell-lines.
  • Gopakumar, G.P., et al., 2018. Convolutional neural network-based malaria diagnosis from focus stack of blood smear images acquired using custom-built slide scanner. J. Biophotonics.
  • He, K., Gkioxari, G., Dollár, P., Girshick, R., 2017. Mask R-CNN. In: 2017 IEEE International Conference on Computer...
  • Jacquemet, G., 2020. Combining StarDist and TrackMate example 1 - breast cancer cell dataset.
  • Kalmady, K.S., Kamath, A.S., Gopakumar, G., Subrahmanyam, G.R.K.S., Gorthi, S.S., 2017. Improved transfer learning...
  • Kingma, D.P., et al., 2015. Adam: A method for stochastic optimization.
  • Kong, J., Wang, F., Teodoro, G., Liang, Y., Zhu, Y., Tucker-Burden, C., Brat, D.J., 2015. Automated cell segmentation...
  • Kothari, S., Chaudry, Q., Wang, M.D., 2009. Automated cell counting and cluster segmentation using concavity detection...
  • Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P., 2017. Focal loss for dense object detection. In: 2017 IEEE...
  • Loss, L.A., et al., 2011. Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures. IEEE Trans. Med. Imaging.
  • Lux, F., Matula, P., 2019. DIC image segmentation of dense cell populations by combining deep learning and watershed....