Medical image segmentation using a contextual-constraint-based Hopfield neural cube

https://doi.org/10.1016/S0262-8856(01)00039-7

Abstract

Neural-network-based techniques such as Hopfield neural networks have been proposed as an alternative approach to image segmentation and have demonstrated benefits over traditional algorithms. However, owing to its architectural limitations, image segmentation using a traditional Hopfield neural network amounts to thresholding of the image histogram. With this technique, high-level contextual information cannot be incorporated into the segmentation procedure. As a result, although the traditional Hopfield neural network can segment noiseless images, it is not robust to noise. In this paper, an innovative Hopfield neural network, called the contextual-constraint-based Hopfield neural cube (CCBHNC), is proposed for image segmentation. The CCBHNC uses a three-dimensional architecture with pixel classification implemented along its third dimension. With this three-dimensional architecture, the network can take into account each pixel's features as well as its surrounding contextual information. Besides the network architecture, the CCBHNC also differs from the original Hopfield neural network in that a competitive winner-take-all mechanism is imposed on the evolution of the network. The winner-take-all mechanism removes the need to determine the values of the weighting factors for the hard constraints in the energy function while still maintaining feasible results. The proposed CCBHNC approach for image segmentation has been compared with two existing methods. The simulation results indicate that the CCBHNC produces more continuous and smoother segmented images than the other methods.

Introduction

Medical image segmentation serves as an important tool in clinical diagnosis and assessment. The result helps doctors recognize organs and tissues correctly, thus enhancing their diagnostic efficiency and reducing their workload in medical image analysis. One challenge associated with image segmentation is the interference caused by noise and artifacts. To eliminate this interference, incorporating contextual information is considered one of the most effective approaches.

Another problem associated with image segmentation is that the task is usually extremely time-consuming. Recently, neural networks, with their fault tolerance and potential for parallel implementation, have been proposed as alternative approaches [1], [2], [3], [4], [5], [6], [7], [8]. Among them, Williams and Feng [8] embedded the Bayesian classification theorem and conditional maximum likelihood into a multi-layer neural network for image segmentation. In their approach, eighteen to twenty-one features, including color, location, and texture features (e.g. entropy, contrast, and gray-level difference), were used to train the neural network via supervised learning. Supervised learning requires a set of targets to train the networks on pre-defined image features. However, because different image modalities have different characteristics, the system requires a different training image set, as well as different image features, for each modality. In addition, these kinds of neural networks require a tedious training process, which results in long computation and training times.

On the other hand, Hopfield neural networks, which use unsupervised learning, preclude the necessity of pre-training the networks [3], [4], [5], [6], [7]. Segmentation using conventional Hopfield neural networks is formulated as a cost-function-minimization problem that performs gray-level thresholding on the image histogram or on the pixels' gray levels arranged in a one-dimensional array [3], [4], [5], [6], [7]. With this approach, the spatial relations among pixels are destroyed, and thus contextual information cannot be incorporated into the network's evolution.
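Because the conventional formulation operates only on the one-dimensional histogram, its output is fully determined by gray-level thresholds. The sketch below (an illustration under our own assumptions, not the formulation of Refs. [3], [4], [5], [6], [7]) clusters the 256-bin histogram into k classes and maps pixels through the resulting lookup table; nothing in it can react to a pixel's neighborhood, which is exactly the limitation discussed above.

```python
import numpy as np

def histogram_threshold_segmentation(image: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Cluster the 256-bin gray-level histogram into k classes and map each
    pixel through the resulting gray-level -> class lookup table."""
    levels = np.arange(256)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    centroids = np.linspace(0.0, 255.0, k)          # initial class centroids
    labels = np.zeros(256, dtype=int)
    for _ in range(iters):
        # assign every gray level to its nearest centroid: a pure histogram operation
        labels = np.argmin(np.abs(levels[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            mask = labels == c
            if hist[mask].sum() > 0:
                centroids[c] = np.average(levels[mask], weights=hist[mask])
    return labels[image.astype(np.uint8)]            # per-pixel class map
```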

In this paper, we propose a three-dimensional Hopfield neural network, called the contextual-constraint-based Hopfield neural cube (CCBHNC), for medical image segmentation. Unlike other neural networks [3], [4], [5], [6], [7], the CCBHNC uses a three-dimensional architecture with pixel classification implemented along its third dimension. With this architecture, the network can incorporate each pixel's contextual information into its classification, achieving a high-level vision approach. Consequently, the effect of minor details or noise can be effectively removed, and the drawback of disconnected fragments can be further overcome.
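As a concrete picture of the cube described above, the sketch below (array shapes and the initialisation are our own illustrative assumptions) represents the network output as an N x N x C binary array V, where V[x, i, c] = 1 indicates that pixel (x, i) is assigned to class c; the segmentation is simply read off the third dimension.

```python
import numpy as np

N, C = 128, 4                        # image side length and number of classes (example values)
V = np.zeros((N, N, C), dtype=np.uint8)

# random one-hot initialisation: exactly one active class neuron per pixel column
init = np.random.randint(0, C, size=(N, N))
V[np.arange(N)[:, None], np.arange(N)[None, :], init] = 1

label_map = V.argmax(axis=2)         # the segmentation is read off the class dimension
assert np.all(V.sum(axis=2) == 1)    # one class per pixel
```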

In optimization, constraints usually play a very important role in solving a problem. There are two types of constraints: soft constraints and hard constraints. Soft constraints are used to help the network obtain more desirable results [9], [10]. It is not necessary to satisfy all soft constraints as long as a proportional balance is retained among them. In contrast, hard constraints enforce conditions that enable the network to reach a feasible solution; therefore, they must be absolutely satisfied. In the past, hard constraints had to be added to the energy function of the Hopfield network for it to reach a reasonable solution. However, it has proved very difficult to determine the weighting factors between the hard constraints and the problem-dependent energy function, and improper parameters lead to infeasible solutions. Recently, Chung et al. [11] proposed a competitive learning concept that excludes the hard constraints from the Hopfield network and eliminates the issue of determining the weighting factors. This competitive learning rule is also adopted in the CCBHNC.
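A minimal sketch of such a winner-take-all column update, in the spirit of the competitive rule of Chung et al. [11] (the function name and the way the net input is supplied are our own assumptions): for one pixel, the class neuron receiving the smallest total input wins and all the others are reset, so the one-class-per-pixel hard constraint is satisfied by construction rather than through a weighted penalty term.

```python
import numpy as np

def winner_take_all_update(V: np.ndarray, x: int, i: int, net_input: np.ndarray) -> None:
    """V: (N, N, C) binary neuron outputs; net_input: length-C total inputs for pixel (x, i)."""
    winner = int(np.argmin(net_input))   # minimisation problem: the smallest input wins
    V[x, i, :] = 0                       # reset all class neurons of this pixel
    V[x, i, winner] = 1                  # only the winning class neuron fires
```

Sweeping this update over all pixel columns until no label changes would be one possible way to evolve the network without any hard-constraint weighting factor.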

In the proposed CCBHNC, two soft constraints are also introduced in the course of image segmentation. One restricts the gray levels assigned to the same class to have the minimum Euclidean distance. The other uses contextual information to force neighboring pixels to be classified into the same class. With these two soft constraints, the CCBHNC takes into consideration both the global gray-level distribution and the contextual information of pixels, obtaining desirable segmentation results from noisy images.

To validate its effectiveness, the CCBHNC has been tested on various kinds of medical images, including CT, MRI, and SPECT. The simulation results show that the CCBHNC produces more continuous and smoother images than the dynamic thresholding [13] and CHNN [6] methods. Furthermore, the adoption of the competitive learning rule in the CCBHNC relieves us of the burden of determining proper values for the weighting factors and facilitates fast convergence of the network.

The remainder of this paper is organized as follows. In Section 2, the architecture and algorithm of the CCBHNC are presented. In Section 3, mathematical derivations for the convergence of the CCBHNC are given. In Section 4, a comparative study between the proposed method and two existing methods is conducted. Finally, conclusions are drawn in Section 5.

Section snippets

A contextual-constraint-based Hopfield neural cube

In general, high-level image segmentation can be considered as a clustering process that simultaneously considers the target pixel's feature and its contextual information. This high-level image segmentation can be achieved by minimizing an objective function composed of a global gray-level term and a contextual-information term. The objective function for this purpose can be represented as follows:

J = \sum_{x=1}^{N} \sum_{i=1}^{N} \sum_{T=1}^{k} \sum_{y=1}^{N} \sum_{\substack{j=1 \\ C_{x,i}=C_{y,j}=T}}^{N} \frac{|g_{x,i}-g_{y,j}|}{\mathrm{Max}(G)} \; + \; \sum_{y}^{N} \sum_{\substack{j \\ (y,j)\in\Omega_{x,i}^{r}}}^{N} f(C_{x,i},C_{y,j}),

where k is
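Since the snippet above is truncated, the following is only a literal, unoptimised reading of the visible parts of the objective, under our own assumptions: f(C_{x,i}, C_{y,j}) equals 1 when the two labels differ and 0 otherwise, Ω_{x,i}^r is the square r-neighbourhood of pixel (x, i), and the contextual penalty is accumulated over every pixel (x, i). It is meant to make the notation concrete, not to reproduce the authors' implementation.

```python
import numpy as np

def objective_J(g: np.ndarray, labels: np.ndarray, r: int = 1) -> float:
    """g: (N, N) gray levels; labels: (N, N) class map; r: neighbourhood radius."""
    N = g.shape[0]
    max_G = float(g.max()) or 1.0
    J = 0.0
    # first term: normalised gray-level distances over all same-class pixel pairs
    for c in np.unique(labels):
        vals = g[labels == c].astype(float)
        J += np.abs(vals[:, None] - vals[None, :]).sum() / max_G
    # second term: contextual penalty for neighbours assigned to different classes
    for x in range(N):
        for i in range(N):
            nb = labels[max(0, x - r):x + r + 1, max(0, i - r):i + r + 1]
            J += float(np.count_nonzero(nb != labels[x, i]))
    return J
```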

The convergence of the CCBHNC

In what follows, we will prove that the energy function of the proposed CCBHNC always decreases during network evolution. This implies that the network will converge to a stable state. Consider the energy function of the CCBHNC:

E = \frac{1}{2} \sum_{k=1}^{C} \sum_{z=1}^{C} \sum_{x=1}^{N} \sum_{y=1}^{N} \sum_{i=1}^{N} \sum_{\substack{j=1 \\ (y,j)\neq(x,i)}}^{N} \Big[ A\, d_{x,i;y,j}\, \delta_{k,z} + B\, \Phi_{x,i}^{p,q}(y,j)\, (1-\delta_{k,z}) \Big] V_{x,i,k} V_{y,j,z}.

According to the architecture of CCBHNC, only the outputs of the neurons with the same height and the outputs of the neighboring neurons with different heights may affect the
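To make the notation of the reconstructed energy concrete, the sketch below evaluates E directly for a given output cube V, with the pairwise distance d and the contextual measure Φ supplied as precomputed arrays. Treating Φ as a plain pairwise array indexed by [x, i, y, j] is our own simplifying assumption, since the superscripts p, q are not defined in the visible snippet; the code is a notational check, not the paper's implementation, and it stores full N^4 pairwise arrays for clarity rather than efficiency.

```python
import numpy as np

def energy(V: np.ndarray, d: np.ndarray, Phi: np.ndarray, A: float = 1.0, B: float = 1.0) -> float:
    """V: (N, N, C) one-hot binary outputs; d, Phi: (N, N, N, N) pairwise terms indexed [x, i, y, j]."""
    N = V.shape[0]
    # same[x, i, y, j] = 1 where (x, i) and (y, j) share a class (the delta_{k,z} part)
    same = np.einsum('xik,yjk->xiyj', V, V)
    # diff = 1 where their classes differ (the 1 - delta_{k,z} part), assuming one-hot V
    diff = np.einsum('xik,yjz->xiyj', V, V) - same
    pair = A * d * same + B * Phi * diff
    idx = np.arange(N)
    pair[idx[:, None], idx[None, :], idx[:, None], idx[None, :]] = 0.0   # drop (y, j) == (x, i) self-terms
    return 0.5 * float(pair.sum())
```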

Experimental results

To show that the proposed CCBHNC is capable of image segmentation and robust to noise, four cases of medical images of different modalities were tested, including a computer-generated phantom image (Fig. 2(a)), a skull-based CT image (Fig. 5(a)), an abdominal MRI image (Fig. 6(a)), and a single photon emission computed tomography (SPECT) image (Fig. 7(a)). All the cases used for evaluating the CCBHNC were collected from the National Cheng Kung University Hospital. The MRI images were taken from the Siemen's

Conclusions

Although image processing and computer vision techniques have been developed for about 30 years and are commonly applied to medical image processing, it is well known that their capabilities are far inferior to those of the human vision system. One of the main reasons is that these techniques observe an image only through individual pixels' gray values, whereas the human vision system observes an image from a global, high-level perspective in which contextual information is simultaneously included.

In this paper, we

Acknowledgements

The author wishes to thank Dr Jzau-Sheng Lin for kindly providing the image data, and M.D.s Ping-Hong Lai and Horng-Ming Tsai for evaluating the segmentation results, which greatly helped to improve the quality of the manuscript. This work was partly supported by the NSC and the Ministry of Education under Academic Excellence Grant 89-B-FA08-1-4.


Cited by (33)

  • Segmentation of abdominal organs from CT using a multi-level, hierarchical neural network strategy

    2014, Computer Methods and Programs in Biomedicine
    Citation Excerpt:

    An alternative method is to use Hopfield NN with unsupervised learning, where the segmentation procedure is formulated as a cost minimization problem [32]. For improving traditional design of Hopfield networks (i.e. architectural limitations), contextual information is incorporated in more advanced designs [9,36]. However, they could only make partial improvements over the traditional ones, since the results show that they fail at the regions where the gray level of the desired region is too close to the adjacent tissues.

  • An efficient neural network based method for medical image segmentation

    2014, Computers in Biology and Medicine
    Citation Excerpt:

    Artificial neural network (ANN) has been widely used in medical image analysis fields such as segmentation, data compression, image enhancement and noise suppression [11,12]. Multilayer perceptron (MLP), self-organizing maps (SOM), Hopfield and pulse coupled neural networks have been also utilized for medical image segmentation [13–18,29,30]. SOM network is one of the most suitable networks used for segmentation.

  • Medical image analysis with artificial neural networks

    2010, Computerized Medical Imaging and Graphics
    Citation Excerpt:

    Furthermore, neural networks have the capability of optimising the relationship between the inputs and outputs via distributed computing, training, and processing, leading to reliable solutions desired by specifications, and medical diagnosis often relies on visual inspection, and medical imaging provides the most important tool for facilitating such inspection and visualisation. Medical image segmentation and edge detection remains a common problem and foundational for all medical imaging applications [15–25]. Any content analysis and regional inspection requires segmentation of featured areas, which can be implemented via edge detection and other techniques.

  • Neuro semantic thresholding using OCR software for high precision OCR applications

    2010, Image and Vision Computing
    Citation Excerpt:

    The images are captured by the machine’s camera and transferred to a personal computer for testing. The correctness of the algorithm has been tested binarizing the votes with different algorithms and reading the results [19,20], character by character, using a commercial OCR [42]. The images of each set are depicted in Figs. 7 and 8.
