
Neurocomputing

Volume 470, 22 January 2022, Pages 130-138

OASIS: One-pass aligned atlas set for medical image segmentation

https://doi.org/10.1016/j.neucom.2021.10.114

Abstract

Medical image segmentation is a fundamental task in medical image analysis. Although deep convolutional neural networks have achieved stellar performance on this challenging task, they typically rely on large labeled datasets, which limits their extension to customized applications. Revisiting the strengths of atlas based segmentation methods, we present a new framework, One-pass aligned Atlas Set for Image Segmentation (OASIS). To address the problem of time-consuming iterative image registration used for atlas warping, the proposed method leverages the power of deep learning to achieve one-pass image registration. In addition, by applying a label constraint, OASIS makes the registration process focus on the regions to be segmented, improving segmentation performance. Furthermore, instead of using image based similarity for label fusion, which can be distracted by large background areas, we propose a novel strategy that computes label similarity based weights for label fusion. Our experimental results on the challenging task of prostate MR image segmentation demonstrate that OASIS significantly improves segmentation performance compared with other state-of-the-art methods.

Introduction

Medical image segmentation is an essential part of medical image analysis. Accurate segmentation of medical images is of great significance for clinical practice, especially for disease diagnosis and treatment planning [1], [2], [3]. For instance, prostate image segmentation is useful for prostate cancer radiotherapy planning and guidance [4]. In the past several years, deep convolutional neural networks (CNNs) have made impressive progress in medical image segmentation due to their powerful hierarchical representation ability. However, training such networks usually requires a large amount of training data with corresponding segmentation labels, which is difficult to obtain because of the required expertise and highly intensive labor. Given relatively small datasets, degraded performance is usually observed when segmenting anatomical structures with large appearance and shape variation.

Before the deep learning era, atlas based segmentation methods were often used for medical image segmentation [5], [6], [7]. They build on the fact that structures and organs share large similarity in appearance and shape across subjects. Based on this characteristic, atlas based methods were developed, which segment a target image by fusing the labels of similar images, i.e. atlases, through image alignment [8]. Such segmentation algorithms have two major steps, atlas selection and label fusion. The former selects a few of the most similar images from the training data as atlases for a target image, which relies on similarity measurement and ranking. Label fusion then fuses the warped atlas labels after image registration to segment the target image.
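For concreteness, the following is a minimal Python sketch of this classical two-step pipeline, not taken from the paper: it assumes atlases are given as dictionaries of NumPy arrays and that a deformable registration routine `register` is supplied by the caller; the similarity metric (global normalized cross-correlation) and majority voting are common default choices, used here only for illustration.

```python
import numpy as np

def select_atlases(target, atlases, k=5):
    """Rank atlases by global normalized cross-correlation with the target
    image and keep the top-k. The metric and k are illustrative defaults."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())
    ranked = sorted(atlases, key=lambda al: ncc(target, al["image"]), reverse=True)
    return ranked[:k]

def atlas_based_segmentation(target, atlases, register, k=5):
    """Classical two-step pipeline: (1) select the k most similar atlases,
    (2) register each atlas to the target, warp its label with the estimated
    deformation, and (3) fuse the warped labels by majority voting."""
    warped_labels = []
    for atlas in select_atlases(target, atlases, k):
        # `register` is assumed to return the atlas label warped into the
        # target space; classically this is an iterative deformable registration.
        warped_labels.append(register(atlas["image"], atlas["label"], target))
    votes = np.mean(np.stack(warped_labels, axis=0), axis=0)
    return (votes >= 0.5).astype(np.uint8)  # majority vote
```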

However, several key factors have limited the effectiveness of atlas based segmentation methods. First, each atlas has to be registered with the target image. Classical iterative deformable registration can be computationally intensive, which slows down the entire segmentation process. Second, the registration treats the whole image equally, so its performance is dragged down by less relevant background. Third, label fusion is based on the similarity between the target and registered images rather than on the region of interest, which further degrades segmentation accuracy.

To overcome the above challenges, in this paper we infuse the power of deep learning into atlas based methods and propose a method named OASIS (One-pass aligned Atlas Set for Image Segmentation). Specifically, the contributions of our method are listed below:

  • Typically, each atlas has to be registered with every target image, which is time-consuming and computationally intensive. To overcome this problem, the proposed OASIS employs a label constrained spatial transform model (STM) for one-pass image registration, which also allows the alignment process to focus on the regions to be segmented and further improves the performance (see the registration sketch after this list).

  • Instead of fusing atlas labels weighted by the similarity between atlas images and the test image, we propose a novel fusion strategy that weights the contribution of each atlas label, reducing the distraction of background on label fusion. It is achieved by measuring the similarity of the registered labels, which makes the label fusion focus on the regions to be segmented rather than the whole image (see the fusion sketch after this list). The experimental results show that the proposed fusion strategy significantly enhances segmentation accuracy.

  • It is worth noting that OASIS is a general framework and can be easily extended to other medical image analysis tasks, especially those with limited training data. Experimental results on the challenging task of prostate MR image segmentation demonstrate that the proposed OASIS significantly improves segmentation performance compared with other state-of-the-art methods.
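The snippets above do not show the exact architecture of the label constrained STM, so the sketch below should be read as one plausible instantiation rather than the paper's implementation: a small 3D CNN predicts a dense displacement field in a single forward pass, the same field warps both the atlas image and its label, and an assumed training loss combines image similarity with a soft Dice term on the labels so that the alignment concentrates on the structure to be segmented. All layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnePassWarp(nn.Module):
    """Illustrative one-pass registration module (not the paper's exact STM):
    a small 3D CNN predicts a dense displacement field from the concatenated
    atlas/target pair in a single forward pass; the same field then warps both
    the atlas image and its label via grid_sample."""

    def __init__(self, in_ch=2, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(feat, 3, 3, padding=1),  # 3-channel displacement field
        )

    def forward(self, atlas_img, atlas_lab, target_img):
        # Flow is predicted directly in normalized grid units for simplicity.
        flow = self.net(torch.cat([atlas_img, target_img], dim=1))
        grid = self._identity_grid(flow) + flow.permute(0, 2, 3, 4, 1)
        warped_img = F.grid_sample(atlas_img, grid, align_corners=True)
        warped_lab = F.grid_sample(atlas_lab, grid, mode="nearest",
                                   align_corners=True)
        return warped_img, warped_lab

    @staticmethod
    def _identity_grid(flow):
        # Normalized sampling grid in [-1, 1], following grid_sample's convention.
        _, _, D, H, W = flow.shape
        zs = torch.linspace(-1, 1, D, device=flow.device)
        ys = torch.linspace(-1, 1, H, device=flow.device)
        xs = torch.linspace(-1, 1, W, device=flow.device)
        z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
        return torch.stack([x, y, z], dim=-1).unsqueeze(0)  # (1, D, H, W, 3)

def label_constrained_loss(warped_img, target_img, warped_lab, target_lab, lam=1.0):
    """Assumed training objective: image similarity (MSE) plus a soft Dice term
    on the labels, acting as the label constraint that concentrates the
    alignment on the structure to be segmented."""
    sim = F.mse_loss(warped_img, target_img)
    inter = (warped_lab * target_lab).sum()
    dice = 1.0 - 2.0 * inter / (warped_lab.sum() + target_lab.sum() + 1e-8)
    return sim + lam * dice
```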
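The label similarity based fusion weights are likewise described only at a high level here, so the following is an assumed reading for illustration: each warped atlas label is weighted by its average Dice agreement with the other warped labels, so the weights are driven by the segmented structure rather than by background-dominated image similarity, and the final mask is a weighted soft vote.

```python
import numpy as np

def dice_overlap(a, b, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a > 0, b > 0).sum()
    return 2.0 * inter / ((a > 0).sum() + (b > 0).sum() + eps)

def label_similarity_fusion(warped_labels):
    """Weight each warped atlas label by its average Dice agreement with the
    other warped labels (a consensus measure that ignores the background-heavy
    image content), then take a weighted soft vote."""
    n = len(warped_labels)
    weights = np.zeros(n)
    for i in range(n):
        others = [dice_overlap(warped_labels[i], warped_labels[j])
                  for j in range(n) if j != i]
        weights[i] = np.mean(others) if others else 1.0
    weights /= (weights.sum() + 1e-8)
    fused = sum(w * (lab > 0).astype(np.float32)
                for w, lab in zip(weights, warped_labels))
    return (fused >= 0.5).astype(np.uint8)
```

In a complete pipeline this weighted vote would replace the plain majority vote used in the classical sketch above.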

The rest of this paper is organized as follows. Section 2 briefly reviews related work on medical image segmentation and atlas based methods. Section 3 describes the proposed OASIS in detail. Section 4 presents and discusses the performance of the proposed method through various experiments on prostate MR image segmentation. Finally, concluding remarks are drawn in Section 5.

Section snippets

Related works

In this section, we briefly review the related works on medical image segmentation and atlas based methods.

Method

In this section, we first give an overview of the proposed OASIS framework and then describe each of its components in detail.

Dataset

In our work, the MICCAI 2012 Prostate MR Image Segmentation (PROMISE12) challenge dataset, a benchmark for evaluating algorithms for segmenting the prostate from MR images, is used for the evaluation. This dataset includes in total 50 transversal T2-weighted MR images of the prostate, which form a representative set of prostate MR images acquired from multiple vendors with different acquisition protocols and variations in voxel size, dynamic range,
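As a practical note, not from the paper: PROMISE12 volumes are typically distributed as MetaImage (.mhd/.raw) files, and a common way to cope with the voxel-size and protocol variation mentioned above is to resample every case to a fixed spacing before training. The sketch below uses SimpleITK for this; the target spacing is an assumption chosen purely for illustration.

```python
import SimpleITK as sitk

def load_and_resample(path, spacing=(1.0, 1.0, 1.5)):
    """Read a PROMISE12 MetaImage volume (e.g. 'Case00.mhd') and resample it to
    a fixed voxel spacing. The target spacing here is an assumption, used only
    to illustrate how voxel-size variation across vendors can be normalized."""
    img = sitk.ReadImage(path)
    orig_spacing = img.GetSpacing()
    orig_size = img.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(orig_size, orig_spacing, spacing)]
    resampled = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                              img.GetOrigin(), spacing, img.GetDirection(),
                              0.0, img.GetPixelID())
    return sitk.GetArrayFromImage(resampled)  # NumPy array in (z, y, x) order
```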

Conclusion

In this paper, inspired by atlas based segmentation methods, an effective framework, One-pass aligned Atlas Set (OASIS), is proposed for medical image segmentation. Benefiting from the power of deep learning, a one-pass image registration model is designed to overcome the problem of time-consuming iterative image registration in atlas based methods. Furthermore, instead of determining the weight of atlas labels by using similarity between atlas images and test

CRediT authorship contribution statement

Qikui Zhu: Conceptualization, Methodology, Writing – original draft. Yanqing Wang: Formal analysis, Resources, Data curation. Bo Du: Writing – review & editing, Supervision, Funding acquisition. Pingkun Yan: Writing – review & editing, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61822113 and the Science and Technology Major Project of Hubei Province (Next-Generation AI Technologies) under Grant 2019AEA170.


References (44)

  • H. Shan et al., 3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network, IEEE Trans. Med. Imaging (2018)
  • H. Chao et al., Deep learning predicts cardiovascular disease risks from lung cancer screening low dose computed tomography, Nat. Commun. (2021)
  • H. Shan et al., Synergizing medical imaging and radiotherapy with deep learning, Mach. Learn.: Sci. Technol. (2020)
  • D. Shen et al., Segmentation of prostate boundaries from ultrasound images using statistical shape model, IEEE Trans. Med. Imaging (2003)
  • B. Schipaanboord et al., Can atlas-based auto-segmentation ever be perfect? Insights from extreme value theory, IEEE Trans. Med. Imaging (2019)
  • N.C. Andreasen et al., Automatic atlas-based volume estimation of human brain regions from MR images, J. Comput. Assist. Tomogr. (1996)
  • H. Yang, J. Sun, H. Li, L. Wang, Z. Xu, Deep fusion net for multi-atlas segmentation: application to cardiac MR images, ...
  • P. Yan et al., Label image constrained multiatlas selection, IEEE Trans. Cybern. (2015)
  • A. Esteva et al., Dermatologist-level classification of skin cancer with deep neural networks, Nature (2017)
  • Q. Zhu et al., Exploiting interslice correlation for MRI prostate image segmentation, from recursive neural networks aspect, Complexity (2018)
  • A. Mortazi et al., CardiacNet: Segmentation of left atrium and proximal pulmonary veins from MRI using multi-view CNN
  • L. Yu et al., Volumetric ConvNets with mixed residual connections for automated prostate segmentation from 3D MR images, AAAI (2017)

Qikui Zhu received the B.S. degree and the Ph.D. degree from the School of Computer Science and Technology, Wuhan University, Wuhan, China. His research focuses on computer vision, pattern recognition, machine learning, and their applications in medical imaging.

Yanqing Wang is a doctor working at Renmin Hospital of Wuhan University, Wuhan, China. Her research interests include gynecological tumors and physical plasma.

Bo Du received the B.S. degree and the Ph.D. degree in Photogrammetry and Remote Sensing from the State Key Lab of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China, in 2005 and 2010, respectively. He is currently a professor with the School of Computer, Wuhan University, Wuhan, China, and a senior member of IEEE. He received the best reviewer award from the IEEE GRSS for his service to the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTARS) in 2011 and the ACM Rising Star Award for his academic progress in 2015. He was a Session Chair for both the International Geoscience and Remote Sensing Symposium (IGARSS) 2016 and the 4th IEEE GRSS Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). He also serves as a reviewer for 20 Science Citation Index (SCI) journals, including IEEE TGRS, TIP, JSTARS, and GRSL.

Pingkun Yan is an Assistant Professor in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute (RPI). Before joining RPI, he was a Senior Scientist with Philips Research working at the clinical site at the National Institutes of Health (NIH). His research focuses on translational medical imaging informatics and image-guided intervention using artificial intelligence and machine learning techniques through close collaboration with clinicians. He has published over 80 peer-reviewed articles in well-recognized journals including Nature Communications, PNAS, Medical Image Analysis, IEEE T-MI, IEEE T-CSVT, IEEE T-BME, IEEE T-ITB, and Medical Physics, and top international conferences including MICCAI, ICCV, CVPR, and ISBI. His publications have been cited more than 6,000 times. His research work has also resulted in 10+ patent filings and issued patents.
