
Neurocomputing

Volume 70, Issues 13–15, August 2007, Pages 2598-2602

Letters
Stability analysis of a discrete Hopfield neural network with delay

https://doi.org/10.1016/j.neucom.2006.12.003

Abstract

This letter investigates convergence theorems for a discrete Hopfield neural network (DHNN) with delay. We present a generalized updating rule for the serial mode. The condition for convergence of a DHNN without delay can be relaxed from a symmetric matrix to a quasi-symmetric matrix. An application is presented to demonstrate the higher convergence speed of our algorithm.

Introduction

Convergence properties are the foundation of real applications such as knowledge extraction and combinatorial optimization. It is well known that a discrete Hopfield neural network (DHNN) without delay always converges to a stable state when operating in a serial mode [1], [5]. This property is extended to a DHNN with delay in this letter. A detailed characterization of the convergence of a DHNN is therefore very important for the investigation of most real applications.

In this letter, we provide a convergence analysis of a DHNN with delay under the serial updating mode. Based on the reduction of an energy function, and by means of a modified DHNN and its updating mode, we obtain the convergence conditions of a DHNN with delay together with the corresponding updating steps. In addition, we discuss the application of the convergence of a DHNN with delay to a minimum-cut (MC) problem.


Hopfield Networks

Let N=(W,θ) be a DHNN with n neurons denoted by {x1, x2, …, xn}. W=(wij)n×n is an n×n real matrix in which wij represents the connection weight from xj to xi, and θ=(θi)n×1 is an n-dimensional real vector in which θi represents the threshold attached to neuron xi. Each neuron is assumed to have two possible states: 1 and -1. The state of the network at time t is the vector X(t)=[x1(t), x2(t), …, xn(t)]^T. A DHNN with delay is a computational model in which the inputs to any neuron, xi, of a DHNN with
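The letter's exact delayed updating rule appears in the full text; as a rough illustration, the sketch below assumes the commonly studied form in which neuron xi receives the current states through a matrix W0 and the one-step-delayed states through a matrix W1, i.e. xi(t+1) = sgn(Σj w0,ij xj(t) + Σj w1,ij xj(t−1) − θi), with one neuron updated at a time (serial mode). The function names (serial_step, run_serial) and the tie-breaking convention sgn(0) = +1 are illustrative assumptions, not taken from the letter.

```python
import numpy as np

def sgn(u):
    """Bipolar threshold; the tie-breaking choice sgn(0) = +1 is a convention."""
    return 1 if u >= 0 else -1

def serial_step(W0, W1, theta, x_now, x_prev, i):
    """Serial mode: update only neuron i from the current and delayed states."""
    u = W0[i] @ x_now + W1[i] @ x_prev - theta[i]
    x_next = x_now.copy()
    x_next[i] = sgn(u)
    return x_next

def run_serial(W0, W1, theta, x0, max_sweeps=100):
    """Sweep the neurons one at a time until the state stops changing."""
    x_prev = np.asarray(x0, dtype=float).copy()
    x_now = x_prev.copy()
    n = len(theta)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            x_next = serial_step(W0, W1, theta, x_now, x_prev, i)
            if x_next[i] != x_now[i]:
                changed = True
            x_prev, x_now = x_now, x_next
        if not changed:          # x(t+1) = x(t) = x(t-1): a stable state
            break
    return x_now
```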

A Comparison with the Hopfield Method

To understand the performance of a DHNN with delay, it is useful to consider a simple example from the point of view of the conditions for convergence to a stable state.

Example 1

Given a Hopfield network N=(W,θ) without delay with

W = \begin{pmatrix} a & 0 & 0 & -0.5 \\ 0 & b & -0.4 & 0 \\ 0.3 & 0.1 & c & 0 \\ 0.1 & 0.1 & 0 & d \end{pmatrix} \quad and \quad θ = (θ1, θ2, θ3, θ4)^T,

when θ = (0, 0, 0, 0)^T, this example demonstrates how the network converges to local maxima of an energy function. Based on Theorem 2, W can be decomposed into W = W0 + W1, where

W0 = \begin{pmatrix} 0 & 0 & 0.15 & -0.2 \\ 0 & 0 & -0.15 & 0.05 \\ 0.15 & -0.15 & 0 & 0 \\ -0.2 & 0.05 & 0 & 0 \end{pmatrix}
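To make the decomposition concrete, the sketch below (reusing run_serial from the previous snippet) rebuilds W from Example 1, forms the off-diagonal symmetrization W0 = (W + W^T)/2 with zero diagonal, which reproduces the W0 displayed above, and sets W1 = W − W0. The diagonal entries a, b, c, d are kept symbolic in the letter and are set to 0 here purely for illustration; the decomposition rule itself is an assumption consistent with the displayed values, not a restatement of Theorem 2.

```python
import numpy as np

# Weight matrix of Example 1; the diagonal entries a, b, c, d are kept
# symbolic in the letter and are set to 0 here purely for illustration.
a = b = c = d = 0.0
W = np.array([[a,   0.0,  0.0, -0.5],
              [0.0, b,   -0.4,  0.0],
              [0.3, 0.1,  c,    0.0],
              [0.1, 0.1,  0.0,  d  ]])
theta = np.zeros(4)

# Off-diagonal symmetrization (an assumption consistent with the W0 shown
# above); the remainder W1 = W - W0 carries the asymmetric part.
W0 = (W + W.T) / 2.0
np.fill_diagonal(W0, 0.0)
W1 = W - W0
print(W0)          # matches the W0 displayed in Example 1

# Serial updating of the delayed network (W0, W1, theta), reusing run_serial
# from the previous sketch, starting from an arbitrary bipolar state.
x_stable = run_serial(W0, W1, theta, np.array([1.0, -1.0, 1.0, -1.0]))
print(x_stable)
```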

Algorithm for a Classical Graph Problem

The theorems in this letter imply that a neural network operating in a serial mode will always reach stable states that correspond to local maxima (or minima) of the energy function. To solve a problem involving the maximization of an energy function, this new network structure and updating mode enable the transformation of such a problem into one involving a standard bivariate energy function. Examples of suitable problems are the MC problem [3] and the feature-evaluation-index
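The MC construction follows [3] and is not reproduced in the snippet above; the following is a generic, hedged illustration of the standard quadratic encoding. For a symmetric weight matrix A with zero diagonal and a bipolar state x ∈ {+1, −1}^n, the cut weight of the induced partition satisfies cut(x) = (1/4)(Σij Aij − x^T A x), so cut(x) and the Hopfield energy E(x) = (1/2) x^T A x differ only by a constant, and maximizing E by serial updating minimizes the cut locally. The graph and helper names below are illustrative.

```python
import numpy as np

def cut_weight(A, x):
    """Total weight of edges crossing the partition {i: x_i=+1} vs {i: x_i=-1}."""
    return 0.25 * float(np.sum(A * (1 - np.outer(x, x))))

def energy(A, x):
    """Quadratic Hopfield energy E(x) = 1/2 x^T A x with zero thresholds."""
    return 0.5 * float(x @ A @ x)

# Illustrative symmetric graph with negative-weight edges, so that the
# minimum cut is not the trivial partition with all nodes on one side.
A = np.array([[ 0.0,  2.0, -1.0,  0.0],
              [ 2.0,  0.0,  0.0, -1.0],
              [-1.0,  0.0,  0.0,  2.0],
              [ 0.0, -1.0,  2.0,  0.0]])

x = np.array([1, -1, 1, -1])

# cut(x) and E(x) differ only by the constant (1/4) * sum(A), for every x.
assert np.isclose(cut_weight(A, x) + 0.5 * energy(A, x), 0.25 * A.sum())

# Serial updating of the delay-free network N = (A, 0): one neuron at a time.
# The stable state reached is a local (not necessarily global) minimum cut.
for _ in range(10):
    for i in range(len(x)):
        x[i] = 1 if A[i] @ x >= 0 else -1
print(x, cut_weight(A, x), energy(A, x))
```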

Conclusions

In this letter we have presented a generalized updating rule for the serial mode of a DHNN with delay. The condition for convergence of a DHNN without delay can be relaxed from a symmetric matrix to a quasi-symmetric matrix. We have also discussed the application of the convergence of DHNNs with delay to the MC problem.

Acknowledgement

This work was supported by The Hong Kong Polytechnic University under Central Research Grant G-T632.




Eric C.C. Tsang (M'04) received his B.Sc. degree in Computer Studies from the City University of Hong Kong in 1990 and his Ph.D. degree in Computing from The Hong Kong Polytechnic University in 1996. He is an Assistant Professor in the Department of Computing of The Hong Kong Polytechnic University. His main research interests are in the areas of fuzzy expert systems, fuzzy neural networks, neural networks, machine learning, genetic algorithms, rough sets, fuzzy rough sets, fuzzy support vector machines, and multiple classifier systems.

Daniel S. Yeung (M'89-SM'99-F'04) received the Ph.D. degree in applied mathematics from Case Western Reserve University in 1974. In the past, he worked as an Assistant Professor of Mathematics and Computer Science at the Rochester Institute of Technology, as a Research Scientist at the General Electric Corporate Research Center, and as a System Integration Engineer at TRW. He was the Chairman of the Department of Computing, The Hong Kong Polytechnic University, Hong Kong. His current research interests include neural-network sensitivity analysis, data mining, Chinese computing, and fuzzy systems. He was the President of the IEEE Hong Kong Computer Chapter and an Associate Editor of both the IEEE Transactions on Neural Networks and the IEEE Transactions on SMC (Part B). He has been elected President-Elect of the IEEE SMC Society. He served as a General Co-Chair of the 2002–2004 International Conferences on Machine Learning and Cybernetics, held annually in China, and as a keynote speaker for the same conference. He leads a group of researchers in Hong Kong and China who are actively engaged in research on computational intelligence and data mining. His IEEE Fellow citation refers to his "contribution in the area of sensitivity analysis of neural networks and fuzzy expert systems".
