
Neural Networks

Volume 12, Issue 9, November 1999, Pages 1207-1212

Contributed article
The asymptotic memory capacity of the generalized Hopfield network

https://doi.org/10.1016/S0893-6080(99)00042-8

Abstract

This paper presents a theoretical analysis of the asymptotic memory capacity of the generalized Hopfield network. The perceptron learning scheme is proposed to store sample patterns as stable states of a generalized Hopfield network. We show that (n−1) and 2n are, respectively, a lower and an upper bound on the asymptotic memory capacity of the network of n neurons, which means that the generalized Hopfield network can store a larger number of sample patterns than the Hopfield network.

Introduction

When the Hopfield network was proposed as an associative memory model in 1982, the sum-of-outer-product scheme was applied to store the sample patterns (Hopfield, 1982). Hopfield demonstrated by computer simulation that a network with n neurons could store about 0.15n patterns in the form of stable states. It is now well known that the asymptotic memory capacity of the Hopfield network with n neurons is n/(4 log n) patterns (McEliece et al., 1987).
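For concreteness, the sum-of-outer-product scheme mentioned above can be sketched as follows. This is a minimal illustration of the classical Hebbian rule with zero thresholds, not the learning scheme analyzed in this paper; the function name and the 1/n scaling are our own choices.

```python
import numpy as np

def outer_product_weights(patterns):
    """Sum-of-outer-product (Hebbian) rule for a Hopfield network.

    patterns: array of shape (m, n) with entries in {-1, +1}.
    Returns a symmetric n x n weight matrix with zero diagonal.
    """
    patterns = np.asarray(patterns, dtype=float)
    m, n = patterns.shape
    W = patterns.T @ patterns / n    # sum of outer products, scaled by 1/n
    np.fill_diagonal(W, 0.0)         # Hopfield networks keep w_ii = 0
    return W

# Example: store a few random bipolar patterns in a 100-neuron network.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(10, 100))   # m = 10, well below 0.15n
W = outer_product_weights(X)
# A stored pattern x is a stable state if sign(W @ x) == x componentwise;
# with m well below 0.15n the stored patterns are typically all stable.
print(all(np.array_equal(np.sign(W @ x), x) for x in X))
```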

The Hopfield network is a single-layer recurrent network of n bipolar (or binary) neurons, uniquely defined by (W,θ), where W is a symmetric zero-diagonal real weight matrix and θ is a real threshold vector. If the weight matrix is instead asymmetric and zero-diagonal, the network is usually called an asymmetric Hopfield network. In this paper, we define a generalized Hopfield network (GHN) as a network of this kind with a general (symmetric or asymmetric) zero-diagonal real weight matrix.

Recent research (Ma, 1997) shows that a GHN having stable states can be stable in the same way as a Hopfield network. Thus it is possible to apply this architecture to associative memory with a learning scheme that stores a set of prescribed patterns as the stable states of a GHN. Moreover, several such learning schemes on GHNs for associative memory have already been established (see, e.g., Gardner, 1988; Wang et al., 1993). However, the memory capacity of the GHN with any learning scheme has not been investigated in depth. In the neural network literature, the following theoretical results relate to the memory capacity of the GHN.

Abu-Mostafa and Jacques (1985) defined the memory capacity as the maximal number of arbitrary state patterns that can be stable in a GHN of n neurons and proved that it is bounded by n. In fact, this deterministic definition of memory capacity is too strict, since we can easily verify that no two state patterns at Hamming distance one can both be stable in any GHN: because wii=0, the net input to the single neuron at which the two patterns differ is identical for both patterns, so that neuron cannot support opposite states. Therefore the memory capacity defined by this deterministic formulation is of little significance, and the resulting bound is loose.
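The zero-diagonal argument behind this observation can be illustrated numerically: since wii=0, the local field of the neuron at which two such patterns differ is the same for both. The following small sketch (ours, not from the paper) shows this for a random zero-diagonal weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n))
np.fill_diagonal(W, 0.0)      # zero-diagonal weight matrix of a GHN

x = rng.choice([-1.0, 1.0], size=n)
y = x.copy()
y[3] = -y[3]                  # y differs from x only at neuron 3

# The local field of neuron 3 ignores x[3] because w_33 = 0,
# so it is identical for both patterns ...
print(W[3] @ x, W[3] @ y)
# ... hence x[3] = sign(W[3] @ x) and y[3] = sign(W[3] @ y) = -x[3]
# cannot both hold: x and y cannot both be stable states.
```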

Another way to define the memory capacity of a class of neural networks (with a given learning scheme) is via the probability P(m,n) that m random state patterns can be made stable in a network of n neurons by the learning scheme. Venkatesh and Psaltis (1989) defined a function C(n) to be the (asymptotic) memory capacity if, and only if, for every λ∈(0,1), as n→∞, P(m,n) approaches one whenever m≤(1−λ)C(n) and approaches zero whenever m≥(1+λ)C(n). By this definition, they found that C(n)=2n is the asymptotic memory capacity of the recurrent network defined by a general weight matrix and a threshold vector (Venkatesh, 1987; Venkatesh & Psaltis, 1989). In the special case of a zero threshold vector, it was proved that C(n)=n under each of the spectral strategies (Venkatesh & Psaltis, 1989). This recurrent network is also a generalization of the Hopfield network, but the diagonal elements of its weight matrix are not necessarily zero, so it differs from the GHN defined in this paper.
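In display form, this definition of the asymptotic memory capacity (restated from the text above) reads:

```latex
% C(n) is the asymptotic memory capacity (Venkatesh & Psaltis, 1989) iff
\forall\,\lambda\in(0,1):\quad
\lim_{n\to\infty} P(m,n)=
\begin{cases}
1, & m\le(1-\lambda)\,C(n),\\
0, & m\ge(1+\lambda)\,C(n).
\end{cases}
```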

Here we prefer the GHN model that keeps wii=0 for two reasons: (1) it has been proved that the GHN with nonnegative weights is stable in randomly asynchronous mode, and simulation experiments show that almost any GHN having stable states is stable in randomly asynchronous mode (Ma, 1997); by these results, the GHN retains the important stability properties of the Hopfield network for associative memory. (2) When wii is restricted to zero, the network is easier to implement in applications.

However, the restriction that wii=0 makes the asymptotic memory capacity harder to determine, and we cannot use the results obtained by Venkatesh and Psaltis (1989). In this paper we use a method of combinatorial analysis to study the asymptotic memory capacity of the GHN.

The main contribution of this paper is to obtain lower and upper bounds on the asymptotic memory capacity of the GHN. In Section 2, we state the main theorem after a brief description of the GHN and the perceptron learning scheme. The proof of the main theorem is given in Section 3. A brief conclusion is given in Section 4.


The main theorem

We first give the mathematical model of a GHN. A GHN is composed of n interconnected neurons and is specified by (W,θ), where W is an n×n zero-diagonal matrix whose element wij denotes the weight on the connection from neuron j to neuron i, and θ is a vector of dimension n whose component θi denotes the threshold of neuron i. For simplicity, we let θi=0, i=1,2,…,n, in this paper.

Every neuron can be in one of two possible states, either 1 or −1. The state of neuron i at time t is denoted by xi(t). The state of
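Although the statement of the update rule is cut off in this excerpt, the dynamics of the model just described can be illustrated with the standard asynchronous update xi(t+1) = sgn(Σj wij xj(t)) with θi=0. The following sketch (the function names and the sgn(0)=+1 convention are our assumptions) shows one asynchronous step and a stability check.

```python
import numpy as np

def ghn_async_step(W, x, i):
    """One asynchronous update of neuron i in a GHN with zero thresholds.

    W: n x n zero-diagonal weight matrix (not necessarily symmetric).
    x: current bipolar state vector in {-1, +1}^n.
    """
    h = W[i] @ x                      # local field of neuron i (w_ii = 0)
    x = x.copy()
    x[i] = 1.0 if h >= 0 else -1.0    # sign convention: sgn(0) = +1 here
    return x

def is_stable(W, x):
    """A state x is stable if no single asynchronous update changes it."""
    return all(np.array_equal(ghn_async_step(W, x, i), x) for i in range(len(x)))
```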

The proof of the main theorem

In this section, we prove the main theorem. The basic difficulty in proving the theorem comes from the fact that W must have a zero diagonal. When wii is not necessarily zero, the proof of relation (i) is closely related to the question of computing the Vapnik–Chervonenkis dimension of the linear classifier (Pollard, 1989). In this case, things are not difficult (linear separability of n−1 bipolar vectors in Rn). But when wii=0, we arrive at the problem of checking the linear separability of a dichotomy of
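Although the perceptron learning scheme itself is only summarized in this excerpt, its per-neuron structure can be sketched as follows: row i of W is trained by perceptron-style corrections so that neuron i reproduces xi for every stored pattern x, with the zero-diagonal constraint wii=0 re-imposed after each correction. This is our own minimal sketch (the learning rate, iteration limit, and function names are assumptions), not a reproduction of the paper's exact scheme.

```python
import numpy as np

def perceptron_store(patterns, max_epochs=1000, eta=1.0):
    """Try to store bipolar patterns as stable states of a zero-diagonal GHN.

    Row i of W is trained independently: neuron i must output x[i]
    for every pattern x, using only the other components (w_ii = 0).
    """
    X = np.asarray(patterns, dtype=float)
    m, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        for _ in range(max_epochs):
            converged = True
            for x in X:
                h = W[i] @ x                 # w_ii = 0, so x[i] has no effect
                if x[i] * h <= 0:            # pattern not strictly stable at neuron i
                    W[i] += eta * x[i] * x   # perceptron correction
                    W[i, i] = 0.0            # re-impose the zero-diagonal constraint
                    converged = False
            if converged:
                break
    return W
```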

Conclusion

In this paper, we have studied the asymptotic memory capacity of the generalized Hopfield network with the perceptron learning scheme. It is proved that (n−1) and 2n are, respectively, a lower and an upper bound on the asymptotic memory capacity of the generalized Hopfield network of n neurons. By this lower bound (n−1), the asymptotic memory capacity of the GHN, as a function of n, is much greater than that of the Hopfield network with the sum-of-outer-product learning scheme. Therefore the GHN has the high

Acknowledgements

This work was supported by China Tianyuan Funds under Project 19376015.

References (12)

  • J. Ma (1997). The stability of the generalized Hopfield networks in randomly asynchronous mode. Neural Networks.
  • Y.S. Abu-Mostafa et al. (1985). Information capacity of the Hopfield model. IEEE Transactions on Information Theory.
  • T.M. Cover (1965). Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers.
  • E. Gardner (1988). The space of interactions in neural network models. Journal of Physics A: Mathematical and General.
  • J.J. Hopfield (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA.
  • J. Komlós (1967). On the determinant of (0,1) matrices. Studia Scientiarum Mathematicarum Hungarica.

