Neural Networks

Volume 33, September 2012, Pages 216-227

A Competitive Layer Model for Cellular Neural Networks

https://doi.org/10.1016/j.neunet.2012.05.005

Abstract

This paper discusses a Competitive Layer Model (CLM) for a class of recurrent Cellular Neural Networks (CNNs), in both continuous-time and discrete-time form. The objective of the CLM is to partition a set of input features into salient groups. We first establish the complete convergence of the continuous-time networks. We then give a necessary condition, and a necessary and sufficient condition, for the existence of the CLM performance in our networks. We also discuss the properties of the discrete-time networks and propose a novel CLM iteration method. This method shows similar performance and storage allocation but faster convergence compared with the previous CLM iteration method (Wersing, Steil, & Ritter, 2001a). Especially for a large-scale network with many features and layers, it can significantly reduce the computing time. Examples and simulation results illustrate the developed theory, the comparison between the two CLM iteration methods, and an application to image segmentation.

Introduction

In human visual perception, perceptual grouping can be defined as the ability to detect the structural layout of visual objects. This phenomenon was first studied in the 1920s by the Gestalt school of psychology, one of whose central theories is the Gestalt law (Koffka, 1962). By virtue of Gestalt laws such as proximity, symmetry, and continuity, humans can detect groups in a set of objects. In computer vision, this grouping process can be considered a procedure for feature binding, which aims at binding related features into common groups, so as to separate groups originating from different features (von der Malsburg, 1981, von der Malsburg, 1995).

The Competitive Layer Model (CLM) was first proposed to solve spatial feature binding and sensory segmentation problems by Ontrup and Ritter (1998) and Ritter (1990). This model is based on the combination of competitive and cooperative processes in a recurrent neural network (RNN) architecture, which can partition a set of input features into salient groups. Due to competitive interactions among layers, each feature is unambiguously assigned to one layer, and feature binding is achieved by a collection of competitive layers. Wersing et al. (2001a) designed a continuous-time CLM RNN with linear threshold (LT) neurons for feature binding and sensory segmentation. Weng, Wersing, Steil, and Ritter (2006) proposed a hybrid learning method based on the CLM. Yi (2010) proposed using continuous-time Lotka–Volterra recurrent neural networks to implement the CLM, and proved that the set of stable attractors of such networks equals the set of minimum points of the CLM energy function in the nonnegative orthant. However, most of these works focus on the CLM properties of different networks or on exploring possible application fields; finding a more efficient CLM algorithm is still a challenge.

The CLM networks can be considered a kind of multistable Winner-Take-All (WTA) network. A multistable network can have multiple stable equilibria, while a mono-stable one always has exactly one stable equilibrium. Traditional WTA neural networks are almost all mono-stable, so that only one neuron among all neurons can be the final ‘winner’. The multistability property provides an interesting way to mediate WTA competition between groups of neurons, so that the final winner is a group of neurons. More discussion about multistability can be found in Hahnloser (1998), Hahnloser, Sarpeshkar, Mahowald, Douglas, and Seung (2000), Hahnloser, Sebastian, and Slotine (2003), Wersing, Beyn, and Ritter (2001b), Xie, Hahnloser, and Seung (2002), Yi and Tan (2004b), Yi, Tan, and Lee (2003), Zhang, Yi, and Yu (2008) and Zhang, Yi, Zhang, and Heng (2009).

In previous CLM work on LT RNNs (Wersing et al., 2001a), an asynchronous CLM iteration method was proposed based on the convergence proof in Feng (1997). Because the network updates only one neuron's state at a time, the iterations are time-consuming, especially for a large-scale network. A synchronous CLM iteration method would therefore help solve this problem. In Zhou and Zurada (2010), we proposed a novel synchronous CLM iteration method, which has similar performance and storage allocation but faster convergence compared with the previous asynchronous CLM iteration method. In this paper, we extend the CLM to Cellular Neural Networks (CNNs) and significantly improve its efficiency.
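The difference between the two update schedules can be illustrated on a generic fixed-point iteration x ← σ(Wx + h), with σ the piecewise-linear CNN saturation. This is only a hedged sketch: the weights, inputs, and schedule details below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def sigma(s):
    # Piecewise-linear saturation to [-1, 1] (the standard CNN output nonlinearity).
    return np.clip(s, -1.0, 1.0)

def async_step(x, W, h, rng):
    # Asynchronous schedule: update a single randomly chosen neuron per step,
    # in the spirit of the iteration of Wersing et al. (2001a).
    i = rng.integers(len(x))
    x = x.copy()
    x[i] = sigma(W[i] @ x + h[i])
    return x

def sync_step(x, W, h):
    # Synchronous schedule: update every neuron at once; one such step
    # does the work of len(x) asynchronous steps.
    return sigma(W @ x + h)
```

Under the synchronous schedule each sweep touches all neurons simultaneously, which is why it scales better for networks with many features and layers.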

The CNN model was first proposed by Chua and Yang (1988b). Since then, CNNs have been widely studied in both theory and applications (Hänggi, 2000, Slavova, 2003). Stability analyses of the standard CNNs can be found in Chua and Yang (1988a), Chua and Yang (1988b), Wu and Chua (1997) and Yi and Tan (2004a). Because popular nonlinear qualitative analysis methods usually require the activation function to be differentiable, while the CNN neuron activation function is continuous but non-differentiable, the qualitative analysis of CNNs turns out to be difficult.
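For reference, the standard Chua–Yang CNN output function, which is the continuous but non-differentiable activation referred to above, can be written in two equivalent ways; the snippet below is only a sketch of this well-known nonlinearity:

```python
import numpy as np

def sigma(s):
    # Chua-Yang CNN output function: continuous and piecewise linear,
    # with non-differentiable corner points at s = -1 and s = +1.
    return 0.5 * (np.abs(s + 1.0) - np.abs(s - 1.0))

def sigma_clip(s):
    # The same function expressed as saturation (clipping) to [-1, 1].
    return np.clip(s, -1.0, 1.0)
```

It is exactly the two corner points at ±1 that defeat analysis methods requiring differentiability.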

In this paper, we first discuss a class of continuous-time recurrent CNNs based on the CLM (CLM-CNN). We prove that such networks can be completely stable. From the qualitative analysis of our model, we prove that there is no stable equilibrium in the interior of the network outputs’ range. By discussing the condition for equilibria to exist on the ±1 boundaries of the CNN neuron activation function, we present a necessary condition for producing the feature binding phenomenon. Furthermore, by using the subspace method of Wersing et al. (2001a), we give a necessary and sufficient condition.

We also discuss the properties of the discrete-time networks and propose a synchronous CLM iteration method based on our previous work in Zhou and Zurada (2010). Under some conditions, this discrete-time CLM-CNN has the same solution space for steady states as the continuous-time one. Compared with the asynchronous CLM iteration method in Wersing et al. (2001a), our method has similar performance and storage allocation but is less time-consuming. Especially for a large-scale network with many features and layers, it can greatly reduce the computing time.

The rest of this paper is organized as follows. The architecture of the proposed continuous-time CLM-CNN is described in Section 2. Preliminaries are given in Section 3. In Section 4, a theoretical analysis of the network is given, including the complete stability of the proposed network, a necessary condition, and a necessary and sufficient condition for feature binding. The discrete-time CLM-CNN and its iteration method are discussed in Section 5. Simulations and illustrative examples are presented in Section 6. Conclusions are given in Section 7. The details of the theoretical analysis and the iteration method for the discrete-time CLM-CNN can be found in the Appendix.

Section snippets

CLM for continuous-time Cellular Neural Networks

The CLM consists of a set of $l$ layers of feature-selective neurons, and each layer contains $n$ neurons (see Fig. 1). There are two kinds of interactions in the model: the vertical WTA interaction and the lateral interaction. Here, $x_{i\alpha}$ is the activity of the neuron at position $i$ in layer $\alpha$, and column $i$ denotes the set of neuron activities $x_{i\alpha}$, $\alpha=1,\ldots,l$, for $i=1,\ldots,n$, that share a common position $i$ across the layers. All neurons in a column $i$ are equally driven by an external input.
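The two interaction types can be sketched on an l × n array of activities; the sizes, weights, and random values below are illustrative assumptions, not the paper's concrete parameters:

```python
import numpy as np

l, n = 3, 4                              # layers, feature positions (hypothetical sizes)
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(l, n))   # x[alpha, i]: activity at position i in layer alpha
f = rng.uniform(-0.2, 0.5, size=(n, n))
f = 0.5 * (f + f.T)                      # symmetric lateral interaction weights

# Vertical WTA interaction: each neuron in column i competes against the
# summed activity of its column, sum over beta of x[beta, i].
column_sum = x.sum(axis=0)               # shape (n,)

# Lateral interaction: within each layer alpha, neuron i cooperates with
# its layer-mates through sum over j of f[i, j] * x[alpha, j].
lateral = x @ f.T                        # shape (l, n)
```

The column-wise competition is what eventually assigns each feature to exactly one layer, while the lateral term groups compatible features within a layer.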

Preliminaries

In this section, we provide preliminaries that will be used to establish our results.

Definition 1

A vector $x=(x_{11},\ldots,x_{nl})^{T}\in\mathbb{R}^{nl}$ is called an equilibrium point (fixed point) of network (1) if it satisfies $-x_{i\alpha}+\sigma\left(h-J\sum_{\beta=1}^{l}x_{i\beta}+\frac{1}{J}\sum_{j=1}^{n}f_{ij}x_{j\alpha}+x_{i\alpha}\right)=0$ for all $i=1,\ldots,n$ and $\alpha=1,\ldots,l$.
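Under the assumption that the fixed-point condition of Definition 1 takes the form $-x_{i\alpha}+\sigma\left(h-J\sum_{\beta}x_{i\beta}+\frac{1}{J}\sum_{j}f_{ij}x_{j\alpha}+x_{i\alpha}\right)=0$, with σ the CNN saturation (the exact signs and the 1/J factor are assumptions), a numerical equilibrium check can be sketched as:

```python
import numpy as np

def sigma(s):
    # Piecewise-linear CNN saturation to [-1, 1].
    return np.clip(s, -1.0, 1.0)

def equilibrium_residual(x, f, h, J):
    # x: (l, n) activities, f: (n, n) lateral weights, h and J scalars.
    # Returns the residual of the assumed fixed-point condition; a point x
    # is an equilibrium when every entry of the residual is zero.
    col = x.sum(axis=0, keepdims=True)              # sum over layers, per column i
    inner = h - J * col + (1.0 / J) * (x @ f.T) + x
    return -x + sigma(inner)
```

For example, x = 0 with h = 0 is trivially an equilibrium under this form, since the inner term and σ(0) both vanish.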

Definition 2

Let $x(t,x_0)$ be a trajectory of (1). An equilibrium point $x^{*}$ is said to be stable (in the sense of Lyapunov) if for any $\varepsilon>0$ there exists a $\delta>0$ such that $\|x_0-x^{*}\|\le\delta$ implies $\|x(t,x_0)-x^{*}\|\le\varepsilon$ for all $t\ge 0$.

Theoretical analysis

In this section, conditions guaranteeing the complete stability of network (1) are presented in Lemma 1. We also present, in Theorem 1, a necessary condition for our networks to exhibit the CLM phenomenon. Furthermore, a necessary and sufficient condition for the CLM phenomenon is given in Theorem 2.

In order to apply the CLM in the CNN, we first need to force the network to be completely stable, which means that every trajectory of the CNN converges to an equilibrium point.

Lemma 1

Suppose matrix f is symmetric,

CLM for discrete-time Cellular Neural Networks

There are two main approaches to studying the relationship between a discrete-time system and its continuous-time counterpart. The first is to transform the system into a corresponding deterministic continuous-time system, based on a fundamental theorem of stochastic approximation theory (Ljung, 1977). To use this theorem, some crucial conditions must be satisfied, such as roundoff limitations and tracking requirements. Additional important
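One common bridge between the two time scales is a forward-Euler discretization of the continuous dynamics. The sketch below assumes dynamics of the form dx/dt = −x + σ(Wx + h); it is an illustration of the discretization idea, not necessarily the transformation used via Ljung's theorem:

```python
import numpy as np

def sigma(s):
    # Piecewise-linear CNN saturation to [-1, 1].
    return np.clip(s, -1.0, 1.0)

def euler_step(x, W, h, tau):
    # Forward-Euler discretization of dx/dt = -x + sigma(W x + h)
    # with step size tau:
    #   x_{k+1} = x_k + tau * (-x_k + sigma(W x_k + h)).
    # At tau = 1 this collapses to the synchronous map x <- sigma(W x + h).
    return x + tau * (-x + sigma(W @ x + h))
```

Note that any x satisfying x = σ(Wx + h) is stationary for every step size τ, which is consistent with the claim that the discrete-time CLM-CNN can share the continuous-time steady-state solution space under some conditions.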

Simulation

In this section, we provide examples and simulation results to illustrate and verify the developed theory. Almost all programs were coded in MATLAB 2008a and run on a PC with an Intel Core i7 CPU, 6 GB RAM, and the Windows Vista Ultimate Service Pack 1 64-bit operating system. We also provide the original code of Example 5 (Computational Intelligence Laboratory).

Example 1

Consider a CLM neural network with 3 layers and 4 neurons in each layer, $\dot{x}(t)=-x(t)+\sigma(W_{c}x(t)+H+x(t))$, where $W_{c}=\frac{1}{J}\left[f\otimes I_{3}-\Pi_{3}\otimes I\right.$

Conclusions

In this paper, we investigate the Competitive Layer Model for a class of cellular recurrent neural networks, in both continuous-time and discrete-time form. We establish the complete stability of both networks. To characterize the feature binding phenomenon for the discussed class of networks, we present a necessary condition and a necessary and sufficient condition. In addition, we outline the analysis of some dynamic properties of our model. We also give a novel synchronous CLM-CNN iteration

Acknowledgments

The authors wish to thank Yonglin Zeng, Dr. Ping Li, Artur Abdullin, Dr. Dongqin Chen, Dr. Lijun Zhang, and Minqing Zhang for their useful discussions and comments. We also want to thank those anonymous reviewers for their suggestions.

References (30)

  • Richard H.R. Hahnloser

    On the piecewise analysis of networks of linear threshold neurons

    Neural Networks

    (1998)
  • L.O. Chua et al.

    Cellular neural networks: applications

    IEEE Transactions on Circuits and Systems

    (1988)
  • L.O. Chua et al.

    Cellular neural networks: theory

    IEEE Transactions on Circuits and Systems

    (1988)
  • Computational Intelligence Laboratory, Electrical and Computer Engineering, University of Louisville....
  • Jianfeng Feng

    Lyapunov functions for neural nets with nondifferentiable input–output characteristics

    Neural Computation

    (1997)
  • Richard H.R. Hahnloser et al.

    Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit

    Nature

    (2000)
  • Richard H.R. Hahnloser et al.

    Permitted and forbidden sets in symmetric threshold-linear networks

    Neural Computation

    (2003)
  • Martin Hänggi et al.

    Cellular neural networks: analysis, design and optimization

    (2000)
  • Roger A. Horn et al.

    Matrix analysis

    (1985)
  • David B. Kirk et al.
  • Kurt Koffka

    Principles of gestalt psychology

    (1962)
  • Lennart Ljung

    Analysis of recursive stochastic algorithms

    IEEE Transactions on Automatic Control

    (1977)
  • Ontrup, Jörg, & Ritter, Helge 1998. Perceptual grouping in a neural model: Reproducing human texture perception,...
  • Helge Ritter

    A spatial approach to feature linking

  • Angela Slavova

    Cellular neural networks: dynamics and modelling

    (2003)

    The research reported here was supported by National Science Foundation of China under Grant 61105061, the Fundamental Research Funds for the Central Universities, Southwest University for Nationalities (11NZYTD04), National Science Foundation of China under Grant 60973070, and the National Research Foundation for the Doctoral Program of Higher Education of China (20090185120009).
