Saliency and ballness driven deep learning framework for cell segmentation in bright field microscopic images
Introduction
Cell image analysis is integral to diagnosing, treating, and monitoring patients with various health conditions, and microscopy remains the gold standard for cell analysis. Microscopic images of blood smears, tissue samples, and body fluid samples are analyzed to obtain quantitative information on cell morphology. The cell signature is determined by the shape and size of the cells, the morphology of the nucleus, the presence of granules, and the amount of cytoplasm. These factors may vary depending on the disease, and microscopic image analysis can be used to assess the variations. Accurate quantification of the cell signature depends on detecting the spatial locations of cells and cellular structures in the image (Dimopoulos et al., 2014). It plays a vital role in the detection, treatment, and monitoring of anemias, malaria (Gopakumar et al., 2018), tuberculosis (Simon et al., 2019), eosinophilia, and different types of cancers, including leukemia (Kalmady et al., 2017; Thanmayi et al., 2021).
Traditionally, medical experts manually extract relevant information from microscopy images, which is tedious and time-consuming. Furthermore, the results depend heavily on the technical expertise of the examiner and on variability across capturing instruments. Bright-field microscopy is the simplest and cheapest optical microscopy technique and is widely used in resource-limited clinics where cost and human expertise are of primary concern (Mualla et al., 2018). Despite its advantages, bright-field microscopy typically exhibits low contrast due to variation in light absorption across biological samples, so the analysis of bright-field images remains a challenge. Semi-automated or fully automated systems are required to extract specific information from microscopic images in a short amount of time. Although fully automated systems cannot be expected to replace the intelligence of medical professionals, they can facilitate faster and more accurate decisions on case variants (Shaukat et al., 2020). Therefore, one of the focus areas in the analysis of microscopy images is the automated detection and segmentation of cells and cellular structures (Song et al., 2018). This helps to quantify the severity of many diseases, including malaria and anemia, and supports differential cell counting to assess an individual's health.
Over the past few years, deep learning techniques have outperformed classical image analysis techniques in cell and nuclei segmentation (Xing et al., 2018). The task is still far from easy owing to challenges such as the heterogeneous shapes of cells in the image, intracellular variability, and the occurrence of cells in clusters. Imaging artifacts, overlapping and touching nuclei and cells, and the appearance of cells and nuclei as dense regions remain open problems in automated image segmentation. Besides, the publicly accessible annotated data available for training such models is insufficient (Dimopoulos et al., 2014). Furthermore, the performance of segmentation algorithms is significantly degraded by the low contrast of bright-field images.
Several studies based on deep learning techniques (Ali et al., 2021; Moen et al., 2019) have shown promising results for bright-field images, but a wide performance gap remains compared to fluorescence images. Therefore, there is still great demand for precise, standardized, and robust whole-cell segmentation algorithms for measuring the properties of cells and subcellular structures, especially in bright-field microscopy images (Al-Kofahi et al., 2018). Owing to the simplicity and cost-effectiveness of bright-field microscopy, developing an accurate cell segmentation framework is a beneficial but challenging task. In this research, we propose a neural network architecture that uses a novel loss function to segment cells accurately, even when cells overlap in the image. Unlike other prevailing segmentation models, the proposed model learns deep features from the original images and their corresponding perceptual feature maps. The perceptual features are derived from the data without any manual effort, and a specific set of features, namely saliency, ballness, and orientation, is selected from them using a voting mechanism. These act as prior information and are used to produce feature-enhanced images for the proposed model. We show in Section 4 (Table 9) that the proposed framework outperforms state-of-the-art models in semantic cell segmentation.
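The tensor voting framework (Medioni et al., 2005) cited here accumulates per-pixel second-order tensors whose eigen-decomposition yields the three maps: stick saliency (λ1 − λ2, evidence of curve/edge structure), ballness (λ2, evidence of junction- or blob-like structure), and orientation (the dominant eigenvector direction). As a rough, simplified illustration only — a structure-tensor surrogate built from image gradients, not the full voting procedure, with the function names being ours — these maps can be sketched as:

```python
import numpy as np

def _smooth(a, r=1):
    """Box-blur with an (2r+1)x(2r+1) window; stands in for vote accumulation."""
    h, w = a.shape
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def perceptual_maps(img, r=1):
    """Approximate saliency, ballness, and orientation maps from the
    eigen-decomposition of a per-pixel 2x2 second-order tensor."""
    gy, gx = np.gradient(img.astype(float))  # per-axis image gradients
    # Smoothed tensor components J = [[jxx, jxy], [jxy, jyy]].
    jxx = _smooth(gx * gx, r)
    jxy = _smooth(gx * gy, r)
    jyy = _smooth(gy * gy, r)
    # Closed-form eigenvalues of a symmetric 2x2 matrix, lam1 >= lam2 >= 0.
    tr = jxx + jyy
    det = jxx * jyy - jxy * jxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    saliency = lam1 - lam2                              # stick saliency
    ballness = lam2                                     # ball saliency
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant direction
    return saliency, ballness, orientation
```

On a synthetic image with a vertical step edge, the saliency map peaks along the edge and stays near zero in flat regions, which is the behavior the feature-enhancement step relies on.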
The main contributions of this research are listed below and are discussed in detail in Section 3.
- 1.
A novel feature fusion approach that combines the power of deep learning-based segmentation networks and perceptual features derived from the images is designed to improve segmentation performance.
- 2.
The perceptual features, namely, saliency, ballness, and orientation, are generated from the images without leveraging any domain knowledge using the tensor voting framework (Medioni et al., 2005).
- 3.
The perceptual features are provided as prior information to the proposed U-shaped encoder–decoder model to improve the instance cell separation capability of the model.
- 4.
A novel combination loss function is formulated from the focal loss and the Jaccard coefficient to address the class imbalance in the dataset.
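Contribution 4 names the focal loss and the Jaccard coefficient, but this excerpt does not give the exact formulation or weights. The sketch below is therefore one plausible combination, not the paper's definition: the weight `w` and the focal parameters `gamma` and `alpha` are illustrative assumptions, and NumPy is used for clarity rather than a specific deep learning framework.

```python
import numpy as np

def focal_loss(y_true, y_prob, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights well-classified (easy) pixels."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    pt = np.where(y_true == 1, y_prob, 1 - y_prob)   # prob of the true class
    at = np.where(y_true == 1, alpha, 1 - alpha)     # class-balance weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))

def jaccard_loss(y_true, y_prob, eps=1e-7):
    """Soft Jaccard (IoU) loss computed on predicted probabilities."""
    inter = np.sum(y_true * y_prob)
    union = np.sum(y_true) + np.sum(y_prob) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def combo_loss(y_true, y_prob, w=0.5):
    """Weighted sum of the two terms (w = 0.5 is a hypothetical choice)."""
    return w * focal_loss(y_true, y_prob) + (1 - w) * jaccard_loss(y_true, y_prob)
```

The focal term keeps rare foreground pixels from being swamped by the abundant background, while the Jaccard term directly optimizes region overlap; combining them targets both pixel-level imbalance and region-level accuracy.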
In this paper, the efficacy of SBU-net is experimentally demonstrated using two bright-field microscopy datasets and one fluorescence microscopy dataset. SBU-net shows significant improvement in the segmentation metrics and can motivate further research on the effective use of perceptual features for segmentation.
The remainder of this paper is structured as follows: Section 2 discusses the related works. Section 3 describes the proposed methodology in detail, followed by the experimentation and results in Section 4, the discussion in Section 5, and the conclusion in Section 6.
Related works
The automation of microscopic image analysis involves a cell localization and segmentation step, which is the most critical and challenging step in the image analysis pipeline. A brief review of existing image segmentation methods is conducted, and the methods are categorized into three groups: traditional methods, deep learning-based segmentation methods, and prior-integrated deep learning methods. For this review, we focused on the significant studies that brought architecture-level innovations for image
Methodology
Accurate segmentation of cellular structures is essential for cell counting and determining cell morphology, where the segmented cells are used for classification and cell tracking. Automated segmentation of cells from low contrast microscopic images becomes problematic when the cells occur as dense regions with overlapping or touching cells. To enhance the performance of deep neural networks and to address these issues, some methods have used domain knowledge as prior information (Tofighi et
Experimental results
The proposed model is evaluated to demonstrate its applicability to cell segmentation. The current study uses bright-field microscopic images to test the model for pixel-level segmentation tasks. We have also tested the architecture on a fluorescence dataset to assess the generalization capability on a different microscopy type. The model is compared against the state-of-the-art models (Zhou et al., 2019, Schlemper et al., 2019, Chaurasia and Culurciello, 2017, Ronneberger et al., 2015) for
Discussions
The research gaps identified and addressed in the current research are the heterogeneous shapes of cells, intracellular variability, and the occurrence of cells as clusters. Segmenting cells from low-contrast bright-field images is also difficult using standard image processing operations. There is also a need for deep models that can work with few annotated training data. Hence, segmenting cells from microscopy images, particularly from bright-field microscopy, is an open problem in the literature. The
Conclusion
The trade-off between model complexity, accuracy, and the amount of training data is essential in developing segmentation models, especially deep learning models. It is important to integrate the strong theory and mathematically sound techniques developed by the computer vision community over the last decades to improve the performance of these deep learning models in resource-limited settings. It will also add meaning to the reported high performance of the deep learning model,
Future work
The model’s capability to segment cells from bright-field microscopic images is established in the study. The generalization capability is also assessed using the fluorescence dataset. One research direction extending this work could be adapting the network to address multi-cell segmentation problems. Instance segmentation is particularly useful for cell counting applications and for measuring the morphological properties of cellular structures. The inclusion of perceptual features for instance
CRediT authorship contribution statement
S.B. Asha: Methodology, Software, Investigation, Writing – original draft. G. Gopakumar: Conceptualization, Writing – review & editing, Supervision. Gorthi R.K. Sai Subrahmanyam: Writing – review & editing, Supervision.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (68)
- et al., Evaluating very deep convolutional neural networks for nucleus segmentation from brightfield cell microscopy images, SLAS Discovery: Adv. Sci. Drug Discov. (2021)
- et al., DUNet: A deformable network for retinal vessel segmentation, Knowl.-Based Syst. (2019)
- et al., Bottleneck feature supervised U-Net for pixel-wise liver and tumor segmentation, Expert Syst. Appl. (2020)
- et al., Robust membrane detection based on tensor voting for electron tomography, J. Struct. Biol. (2014)
- et al., An evaluation metric for image segmentation of multiple objects, Image Vis. Comput. (2009)
- et al., Attention gated networks: Learning to leverage salient regions in medical images, Med. Image Anal. (2019)
- et al., Stacked dilated convolutions and asymmetric architecture for U-Net-based medical image segmentation, Comput. Biol. Med. (2022)
- et al., Attentive neural cell instance segmentation, Med. Image Anal. (2019)
- et al., A deep learning-based algorithm for 2-D cell segmentation in microscopy images, BMC Bioinformatics (2018)
- et al., Disease diagnosis system using IoT empowered with fuzzy inference system, Comput. Mater. Contin. (2022)