
Neural Networks

Volume 12, Issue 1, January 1999, Pages 107-126

Contributed Article
Basis function models of the CMAC network

https://doi.org/10.1016/S0893-6080(98)00113-0

Abstract

An interpretation of the Cerebellar Model Articulation Controller (CMAC) network as a member of the General Memory Neural Network (GMNN) architecture is presented. The usefulness of this approach stems from the fact that, within the GMNN formalism, CMAC can be treated as a particular form of basis function network, where the basis function is inherently determined by the type of input quantization the network performs. Furthermore, given the relative regularity of the input-space quantization performed by CMAC, we are able to derive an expected (or average) form of the basis function characteristic of this network. Using this basis form, it is possible to create basis-function models of the CMAC mapping, as well as to gain more insight into its performance. The developments are supported by numerical simulations.

Introduction

The Cerebellar Model Articulation Controller (CMAC) was introduced by Albus, 1971, Albus, 1975a, Albus, 1975b, Albus, 1979, who, concurrently with Marr (1969), developed a functional model of the mammalian cerebellum. The model takes advantage of the high degree of regularity present in the organization of the cerebellar cortex and offers numerous implementational advantages. Furthermore, the network output is inherently linear in its adjustable parameters (which makes it attractive where training is concerned), and so well-understood linear algorithms, such as least mean squares (LMS), are applicable. The CMAC network has become especially popular in robotics and control, where the real-time capabilities of the network are of particular importance (Miller et al., 1990; Tolle and Ersü, 1992). Although a large portion of the reported results concerning CMAC focuses on practical applications, several more rigorous theoretical analyses of the CMAC mapping have also been undertaken (Parks and Militzer, 1989; Cotter and Guillerm, 1992), and some constraints on the classes of nonlinear mappings realizable by this network have been identified (Brown et al., 1993).
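To make the linear-in-the-weights point concrete, here is a minimal sketch of the LMS update as commonly applied to CMAC-style weight tables (the function name, the learning rate and the equal split of the error over the K active cells are illustrative assumptions, not details given in this excerpt):

    def lms_update(weights, active_cells, target, lr=0.1):
        """One LMS step for a table-based linear model: the output is the
        sum of the K active weights, so the prediction error can be
        distributed equally among them."""
        K = len(active_cells)
        output = sum(weights[c] for c in active_cells)
        delta = lr * (target - output) / K
        for c in active_cells:
            weights[c] += delta
        return output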

It has also been pointed out (Kołcz, 1996) that CMAC belongs to a wider class of neural network architectures, recently introduced as the General Memory Neural Network (GMNN) (Kołcz and Allinson, 1995). Networks of this class can be interpreted as variants of basis function architectures (e.g. radial basis function (RBF) (Broomhead and Lowe, 1988; Powell, 1992) and kernel regression (KR) (Härdle, 1990; Specht, 1991) networks), which provides additional insight into the properties of their mapping. In Kołcz and Allinson (1995), we suggested that networks of the GMNN type whose structure is particularly regular can be modelled by basis function networks, with the basis function being an estimated (or average) version of the basis characteristic of the particular GMNN variant. In this paper, we propose such a representation of the CMAC network and demonstrate its usefulness in predicting and analysing the network's behaviour. The paper is organized as follows. In Section 2 we introduce the general concept of the CMAC mapping, particularly in the context of its equivalence with the GMNN architecture. Section 3 discusses the input-space quantization performed by CMAC, with emphasis placed on the case of uniform quantization; standard and modified versions of CMAC encoding are considered. In Section 4 the expected form of the CMAC address distance is derived (for the uniform quantization case) and compared with experimental data. Section 5 provides a performance comparison between the CMAC network and its basis-function model on a case problem of chaotic time-series prediction. The paper is concluded in Section 6.

Section snippets

General structure of CMAC mapping

The function of CMAC has its roots in the operation of the biological cerebellar cortex and is appealing due to its intuitiveness and simplicity. Essentially, CMAC covers the input space with a number of overlapping `sensors' or `association cells', such that each sensor is active for points within a certain small region of the input space and exactly K sensors are activated by any given input (see Fig. 1). Any particular input to the network generates a response in the form of the …
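A minimal sketch of this structure, assuming a one-dimensional input, randomly displaced tilings and a hashed weight table (TinyCMAC and all parameter choices are illustrative, not the construction analysed in the paper):

    import numpy as np

    class TinyCMAC:
        """Sketch of the CMAC structure: K overlapping tilings cover the
        input line; every input activates exactly one cell per tiling and
        the response is the sum of the K corresponding weights."""

        def __init__(self, K=8, cell_width=0.1, table_size=4096, seed=0):
            self.K = K                      # cells active per input
            self.cell_width = cell_width    # receptive-field width
            self.table_size = table_size    # hashed weight table size
            self.weights = np.zeros(table_size)
            rng = np.random.default_rng(seed)
            # each copy of the quantizer is displaced by a random offset
            self.offsets = rng.uniform(0.0, cell_width, size=K)

        def active_cells(self, x):
            """One cell address per tiling for scalar input x."""
            return [hash((k, int(np.floor((x + self.offsets[k]) / self.cell_width))))
                    % self.table_size for k in range(self.K)]

        def predict(self, x):
            return self.weights[self.active_cells(x)].sum()

Nearby inputs fall into many of the same cells and so share weights, which is the source of the network's local generalization.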

The structure of CMAC quantization

Each of the K nodes of a CMAC network performs a variant of product (per-component scalar) quantization of the input space; that is, each of the D components of an input vector is quantized individually, which results in hyper-rectangular quantization cells oriented along the coordinate axes in R^D. Each of the K vector quantizers has D scalar components (one per input-vector dimension), and conversely, each coordinate of an input vector is quantized by K scalar quantizers (one per network node). The …
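For the uniform case this quantization step can be written out directly. A sketch, assuming the classical placement in which node k's tiling is displaced by k/K of a cell width in every dimension (quantization_matrix and its arguments are illustrative names):

    import numpy as np

    def quantization_matrix(x, K, cell_width):
        """D x K matrix Q(x): column k holds the D scalar quantizer
        outputs of network node k, whose tiling is displaced by k/K of a
        cell width in every dimension (uniform, Albus-style placement)."""
        x = np.asarray(x, dtype=float)        # input vector in R^D
        Q = np.empty((x.size, K), dtype=int)
        for k in range(K):
            Q[:, k] = np.floor(x / cell_width + k / K)
        return Q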

Derivation of CMAC address distance and proximity functions

It can be seen that the addresses generated by CMAC for two input points, x and y, will be determined by the quantization matrices Q(x) and Q(y) produced for these two points. In particular, the kth components of the address vectors generated for x and y will be identical as long as the kth columns of Q(x) and Q(y) are the same. On the other hand, if the kth network quantizer produces different outputs for x and y in at least one dimension, then Q(x) and Q(y), and hence A(x) and A(y), will differ.
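In code, the induced proximity measure is simply a count of matching columns; a sketch building on the hypothetical quantization_matrix helper above:

    def address_proximity(Qx, Qy):
        """Number of network nodes whose quantizer outputs agree for the
        two inputs, i.e. the number of identical columns of Q(x) and Q(y);
        dividing by K gives the fraction of shared association cells."""
        return int(np.sum(np.all(Qx == Qy, axis=0)))

    # Nearby points share a majority of their K cells, e.g.:
    # address_proximity(quantization_matrix([0.31, 0.72], 8, 0.1),
    #                   quantization_matrix([0.33, 0.74], 8, 0.1))  # -> 5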

Problem setting

To compare the performance of CMAC and its model (with expected address proximity as the basis function) we considered the problem of predicting the chaotic Mackey-Glass series, which has received much attention in the neural network community (Lapedes and Farber, 1987; Moody and Darken, 1989; Moody, 1989). The series arises as a solution to the following difference-delay equation

$$x(t+1) - x(t) = -b\,x(t) + \frac{a\,x(t-\tau)}{1 + x(t-\tau)^{10}},$$

where both t and τ are integers. When the parameters a and b are set to a = 0.2 …
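The discrete series can be generated directly from this recursion; a sketch, where b = 0.1, τ = 17 and the constant initial history are common choices from the cited prediction literature rather than values fixed by this excerpt:

    import numpy as np

    def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2):
        """Iterate x(t+1) = x(t) - b*x(t) + a*x(t-tau)/(1 + x(t-tau)**10),
        starting from a constant history of length tau; return n samples."""
        x = np.full(n + tau, x0)
        for t in range(tau, n + tau - 1):
            x[t + 1] = x[t] - b * x[t] + a * x[t - tau] / (1.0 + x[t - tau] ** 10)
        return x[tau:]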

Conclusions

The quantization performed by the CMAC network has been described in the context of the equivalence between CMAC and the more general GMNN architecture. Particular emphasis was placed on the case of uniform quantization, which is highly regular and amenable to formal analysis.

In particular, we have derived the formula for the expected address distance function of the CMAC network and shown how it can be used to create an approximate basis-function model of this architecture. The closed-form …

Acknowledgements

One of the authors (AK) would like to thank the Overseas Research Foundation, York University and UMIST for supporting this research.

References (38)

  • Bledsoe, W., & Browning, I. (1959). Pattern recognition and reading by machine. IRE Joint Computer Conference, p....
  • Broomhead, D.S., & Lowe, D. (1988). Multivariable functional interpolation and adaptive networks. Complex Systems.
  • Ellison, D. (1988). On the convergence of the Albus perceptron. IMA Journal of Mathematical Control and Information.
  • Farmer, J.D., et al. (1987). Predicting chaotic time series. Physical Review Letters.
  • Gersho, A., & Gray, R.M. (1992). Vector quantization and signal compression. Boston: Kluwer...
  • Girosi, F., et al. (1995). Regularization theory and neural network architectures. Neural Computation.
  • Härdle, W. (1990). Applied nonparametric regression. Cambridge: Cambridge University...
  • Karczmarz, S. (1937). Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Let., Cl....
  • Kołcz, A. (1996). Approximation properties of memory-based artificial neural networks. PhD thesis, University of...