Automatic construction of filter tree by genetic programming for ultrasound guidance image segmentation

https://doi.org/10.1016/j.bspc.2022.103641

Highlights

  • Design a specific predefined function set for processing UGI data.

  • Introduce position-determined function (PDF) to incorporate some prior knowledge.

  • Use a linearly decreasing mutation rate to increase the population diversity.

  • Design a bloat penalty term to reduce the bias for small-sized individuals.

Abstract

Segmentation of ultrasound guidance images (UGIs) is a critical step in ultrasound-guided high intensity focused ultrasound (HIFU) therapy. However, the low signal-to-noise ratio characteristic of UGIs makes it difficult to acquire enough annotations. This paper proposes a novel genetic programming-based approach that automatically constructs an image filter tree (IFT) for UGI segmentation, since genetic programming has a natural advantage in training on small datasets. In the new approach, a set of predefined functions with better anti-noise performance is adapted to deal with noise interference. Moreover, a position-determined function is designed to incorporate preoperative information in each IFT, forming a closed-loop system and thereby facilitating the segmentation process. The optimal IFT evolved by genetic programming, together with a preprocessing step and a postprocessing step, forms the pipeline for the segmentation of UGIs. Quantitative evaluation of the segmentation results shows that the mean true positive rate, mean false positive rate, mean intersection over union, mean norm Hausdorff distance and mean norm maximum average distance are 94.86%, 6.72%, 89.14%, 3.20% and 0.83%, respectively, outperforming popular convolutional neural network-based segmentation methods. The segmentation results reveal that the evolved IFT can achieve accurate segmentation of UGIs and indicate that the proposed approach can be a promising option for medical image segmentation when only a few training samples are available.

Introduction

High intensity focused ultrasound (HIFU) therapy is an emerging nonradioactive extracorporeal treatment technology whose non-invasive characteristic greatly alleviates the suffering of tumor patients [1], [2], [3], [4], [5], [6]. Ultrasound guidance is widely adopted in HIFU therapy for real-time monitoring of tumor or organ movement throughout the treatment process, and the acquired ultrasound guidance images (UGIs) need to be segmented accurately and rapidly for subsequent tumor ablation. However, owing to the influence of the transducer’s mounting mode in the imaging process [7], the image quality of UGIs is more degraded than that of conventional ultrasound images, which makes manual segmentation a challenging task and annotations difficult to acquire. An accurate computer-aided segmentation method would be a promising alternative to laborious manual delineation and would increase treatment efficiency.

Over the past decade, many traditional ultrasound image segmentation methods have been proposed, which can be roughly classified into two categories: contour-based and region-based [8], [9]. Contour-based methods [10], [11], [12] are most commonly built on the active contour model. The basic idea of the active contour model is the curve evolution technique, which converges an initial contour to the final contour by minimizing an energy function. Active contour model-based methods can intrinsically incorporate powerful shape priors by imposing constraints on the shape of the segmentation; nevertheless, without a good initialization it is usually difficult for them to converge to the global optimum because of the local minima induced by speckle noise [13], [14]. Region-based methods [7], [15], [16], [17], [18] usually oversegment the ultrasound image into several superpixels, then merge superpixels of the same type into one region and select the tumor region from those regions. Since a superpixel usually consists of a group of similar pixels, texture features can be extracted to represent it for clustering or classification. These methods can achieve accurate segmentation of UGIs, but they are not end-to-end. Therefore, a more automatic way of developing a model for UGI segmentation is still needed.

Some machine learning methods, such as convolutional neural networks (CNNs) [19], [20], [21], [22], [23] and genetic programming [24], [25], [26], [27], can be trained end-to-end and perform well in the field of computer-aided diagnosis [28], [29]. However, for the aforementioned reasons, the available annotated UGI data are very limited, which greatly affects CNN training. On the one hand, training the millions of convolution kernels that give a CNN its great expressive power requires a large amount of data, and scarce training data make a CNN prone to overfitting [30]. On the other hand, a larger dataset is needed to discover latent informative features as the noise interference increases. These two problems make it difficult for CNN-based segmentation methods to achieve satisfactory performance, but they can be addressed by genetic programming. In tree-based genetic programming, each individual is represented by a tree, where the root and internal nodes are functions selected from a predefined function set and the leaf nodes are terminals [31]. The functions in the predefined function set are fixed, so the only thing that needs to be optimized is the way the functions are connected, which greatly reduces the need for training data. As for noise interference, since the functions in the predefined function set can be task-specific, a function set with better anti-noise performance can be designed to deal with this problem. In general, genetic programming has a natural advantage in training on small datasets.
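
To make the tree representation concrete, the following is a minimal C++ sketch of how such an individual (an image filter tree) could be organized and evaluated. The Node layout, the Image type and the Filter signature are illustrative assumptions, not the authors' implementation; the point is that every filter is fixed and only the wiring of the tree is subject to evolution.

    // Minimal sketch of a tree-based GP individual (an image filter tree).
    // Image, Filter and Node are illustrative assumptions, not the paper's code.
    #include <functional>
    #include <memory>
    #include <vector>

    struct Image { std::vector<std::vector<float>> pixels; };  // toy image type

    // A fixed, task-specific filter drawn from the predefined function set,
    // e.g. a smoothing or edge-enhancement operator adapted for speckle noise.
    using Filter = std::function<Image(const std::vector<Image>&)>;

    struct Node {
        Filter op;                                    // function at this node
        std::vector<std::unique_ptr<Node>> children;  // empty => terminal
    };

    // Evaluate the tree bottom-up: terminals return the (preprocessed) UGI,
    // internal nodes apply their fixed filter to their children's outputs.
    Image evaluate(const Node& node, const Image& input) {
        if (node.children.empty()) return input;      // terminal: the input image
        std::vector<Image> args;
        args.reserve(node.children.size());
        for (const auto& child : node.children)
            args.push_back(evaluate(*child, input));
        return node.op(args);                         // apply the fixed filter
    }

During evolution, crossover and mutation only rearrange or replace nodes of such trees; the filters themselves never change, which is what keeps the search space, and hence the data requirement, small.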

Genetic programming has been employed in several computer vision tasks, although mostly in the field of natural image processing. An early attempt was made by Nishikawa et al. [32] to automatically develop an image filter for detecting major cracks in bridge images. Later, using a similar function set, Keita and Tomoharu [33] constructed different image transformations that can convert photographs into variously stylized painterly images. Another work worth mentioning was done by Paris et al. [34], who developed a computational system for automatically constructing hardware device-friendly image filters from a sequence of operators (mainly morphological and logical). They pointed out that this genetic programming system could be an alternative to a design process usually done manually by human experts. Over the past few years, Zhang et al. [31], [35] have done considerable work on using genetic programming for image classification. These genetic programming-based approaches can automatically learn informative features for different image classification tasks but have not been applied to image segmentation. In summary, genetic programming can be applied to vision tasks and works well with small datasets. Moreover, the function set can be customized for different cases. Therefore, it has the potential to be used in the field of medical image processing.

Inspired by the aforementioned works [24], [31], [32], [33], [34], [35], we investigate leveraging genetic programming, an end-to-end evolving system, to deal with the challenging UGI segmentation problem when only a few training samples are available. Two advantages can readily be observed. First, the process of building a model for accurate and fast segmentation of heavily speckled UGIs becomes more automated. Second, genetic programming only needs to optimize the way the filters are connected, which requires less data for training. Specifically, a set of classical filters is adapted to cope with the speckle noise in UGIs, and some prior knowledge, which can only be used during postprocessing in some other applications, can also be embedded in each IFT to form a closed-loop system and thus facilitate the segmentation process. Moreover, a linearly decreasing mutation rate, a modified bloat penalty term, and a validation process are also introduced in our approach. The proposed approach is validated on our UGI dataset to verify its feasibility. The main contribution of this paper is introducing genetic programming for UGI segmentation with few annotations, on which basis three improvements are made:

  • 1.

    Design a specific predefined function set for processing UGI data. Considering the heavy noise and blurry boundaries in UGIs, more filters for smoothing and edge enhancement are included. Moreover, the size of some filters is enlarged since speckle noise in UGIs usually covers several pixels. Introducing more anti-noise filters can prevent the IFTs from becoming too large, thus reducing training time.

  • 2.

    Introduce a position-determined function (PDF) to incorporate prior knowledge that can only be used during postprocessing in other methods. Prior knowledge such as tumor size information obtained during preoperative imaging is adapted into a function and then placed at the root node of each tree. By doing so, each IFT becomes a closed-loop system. Embedding this information in the IFTs can simplify the segmentation problem to some extent.

  • 3.

    Improve the training process based on [32]. A linearly decreasing mutation rate is designed to increase the diversity of the population in the initial training stage, and the bloat penalty term is modified to reduce the bias for small-sized individuals. Moreover, a validation process is introduced in our method to prevent overfitting (see the sketch after this list).
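
As a concrete illustration of the third improvement, the following C++ sketch shows one way a linearly decreasing mutation rate and a size-tolerant bloat penalty could be written. The linear schedule, the tolerated tree size and the penalty weight are assumptions made for illustration, not the paper's exact formulation.

    // Illustrative sketch of the two training-process tweaks in improvement 3.
    // The schedule, tolerated size and weight are assumptions, not the paper's formulas.
    #include <algorithm>
    #include <cstddef>

    // Mutation rate decreases linearly from a higher initial value (more
    // exploration and population diversity early on) to a lower final value.
    double mutationRate(int generation, int maxGenerations,
                        double initialRate = 0.30, double finalRate = 0.05) {
        double t = std::min(1.0, static_cast<double>(generation) / maxGenerations);
        return initialRate - (initialRate - finalRate) * t;
    }

    // Fitness = segmentation error plus a bloat penalty that only grows once the
    // tree exceeds a tolerated size, so that small trees are not favored merely
    // for being small.
    double penalizedFitness(double segmentationError, std::size_t treeSize,
                            std::size_t toleratedSize = 30, double weight = 1e-3) {
        std::size_t excess = (treeSize > toleratedSize) ? treeSize - toleratedSize : 0;
        return segmentationError + weight * static_cast<double>(excess);
    }

Under such a schedule, early generations mutate aggressively to explore diverse tree shapes, while later generations mutate less so that promising IFTs are refined rather than disrupted.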

The remainder of this paper is organized as follows. Section 2 describes the detailed procedures of the proposed method and the metrics used to evaluate the experimental results. Section 3 presents the experimental results for tumor segmentation. Section 4 discusses the segmentation results and the advantages and limitations of the proposed method. Conclusions are drawn in Section 5.

Section snippets

Materials and methods

To solve the challenging UGI segmentation problem, we propose an end-to-end training framework for automatically evolving an IFT to approximate the transformation from a UGI to a segmentation mask. First, in the image acquisition step, the UGI dataset used in this task is described. Second, a predefined function set specific to our task is illustrated in detail. Third, improvements to the training process are made based on [32]. Finally, metrics for evaluating segmentation performance are presented.

Results

Our proposed approach, including preprocessing, genetic programming and postprocessing, is implemented in C++ and executed on an Intel(R) Core(TM) i5-3570 CPU (3.40 GHz). No accelerating techniques such as GPU acceleration or parallel computation were used. Contours drawn by one radiologist and checked by another radiologist were used as the ground truth contours, and the filled contours were taken as the ground truth tumor regions.

Discussion

In this paper, we successfully applied genetic programming to handle the UGI segmentation problem with only a few training samples. The predefined function set is designed to incorporate more domain knowledge and thereby reduce the need for image data; the training process of genetic programming then automatically evolves the optimal IFT to approximate the transformation from a preprocessed UGI to a segmentation mask. Segmentation results (Fig. 8) indicate that the obtained optimal IFT can achieve accurate segmentation of UGIs.

Conclusion

We developed an end-to-end framework based on genetic programming for automatic construction of an IFT for UGI segmentation with only a few annotated training samples, and the optimal IFT evolved by genetic programming can achieve accurate segmentation at a lower time cost. Quantitative evaluation shows that our proposed approach outperforms the CNN-based segmentation methods and performs more stably on newly acquired UGIs. Our proposed approach therefore has the potential to be applicable to other medical image segmentation tasks where only a few training samples are available.

Funding

This work was supported by the National Basic Research Program of China (Grant No.: 2011CB707900) and the Basic Ability Improvement Project of Young Teachers in Guangxi Universities (Grant No.: 2019KY0816).

Declaration of Competing Interest

We declare that we have no financial and personal relationships with other people or organizations that can inappropriately influence our work, there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled, “Automatic Construction of Filter Tree by Genetic Programming for Ultrasound Guidance Image Segmentation”.

References (50)

  • Q. Huang et al.

    Automatic segmentation of breast lesions for interaction in ultrasonic computer-aided diagnosis

    Inf. Sci.

    (2015)
  • H.G. Adelmann

    Butterworth equations for homomorphic filtering of images

    Comput. Biol. Med.

    (1998)
  • F. Ciompi et al.

    Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box

    Med. Image Anal.

    (2015)
  • Y.H. Hsiao et al.

    Clinical Application of High-intensity Focused Ultrasound in Cancer Therapy

    J. Cancer

    (2016)
  • J.E. Kennedy

    High-intensity focused ultrasound in the treatment of solid tumours

    Nat. Rev. Cancer

    (2005)
  • J.E. Kennedy et al.

    High intensity focused ultrasound: surgery of the future?

    Br. J. Radiol.

    (2003)
  • E. Martin et al.

    High-intensity focused ultrasound for noninvasive functional neurosurgery

    Ann. Neurol.

    (2009)
  • D. Zhang et al.

    Segmentation of tumor ultrasound image in HIFU therapy based on texture and boundary encoding

    Phys. Med. Biol.

    (2015)
  • Q. Huang et al.

    Breast ultrasound image segmentation: a survey

    Int. J. Comput. Assist. Radiol. Surg.

    (2017)
  • D. Zhang, Y. Liu, Y. Yang, M. Xu, Y. Yan, Q. Qin, A region-based segmentation method for ultrasound images in HIFU...
  • K. He et al.

    Deep Residual Learning for Image Recognition

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    (2016)
  • K. Simonyan, A.J.C. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, abs/1409.1556...
  • N. Abraham, N.M. Khan, A Novel Focal Tversky Loss Function With Improved Attention U-Net for Lesion Segmentation, 2019...
  • O. Oktay, J. Schlemper, L.L. Folgoc, M.J. Lee, M.P. Heinrich, K. Misawa, K. Mori, S.G. McDonagh, N.Y. Hammerla, B....
  • O. Ronneberger et al.

    U-Net: Convolutional Networks for Biomedical Image Segmentation

    (2015)