Contributed Article

Basis function models of the CMAC network
Introduction
The Cerebellar Model Articulation Controller (CMAC) was introduced by Albus, 1971, Albus, 1975a, Albus, 1975b, Albus, 1979, who, concurrently with Marr (1969), developed a functional model of the mammalian cerebellum. The model takes advantage of the high degree of regularity present in the organization of the cerebellar cortex and offers numerous advantages from an implementational point of view. Furthermore, the network's output depends linearly on its adjustable parameters (which makes it attractive where training is concerned), so well-understood linear algorithms, such as least mean squares (LMS), are applicable. The CMAC network has become especially popular in robotics and control, where its real-time capabilities are of particular importance (Miller et al., 1990; Tolle and Ersü, 1992). Although a large portion of the reported results concerning CMAC focuses on practical applications, several more rigorous theoretical analyses of the CMAC mapping have also been carried out (Parks and Militzer, 1989; Cotter and Guillerm, 1992), and some constraints on the classes of nonlinear mappings realizable by this network have been identified (Brown et al., 1993).
It has also been pointed out (Kołcz, 1996) that CMAC belongs to a wider class of neural network architectures, which has been recently introduced as the General Memory Neural Network (GMNN) (Kołcz and Allinson, 1995). In particular, networks of this class can be interpreted as particular variants of basis function architectures (e.g. radial basis function (RBF) (Broomhead and Lowe, 1988; Powell, 1992) and kernel regression (KR) (Härdle, 1990; Specht, 1991) networks), which provides additional insight into the properties of their mapping. In (Kołcz and Allinson, 1995), we suggested that networks of the GMNN type whose structure is particularly regular could be modelled by basis function networks, with the basis function being an estimated (or average) version of the basis characteristic of the particular GMNN variant. In this paper, we propose such a representation of the CMAC network and demonstrate its usefulness in predicting and analysing the network behaviour. The paper is organized as follows: In Section 2 we introduce the general concept of CMAC mapping, particularly in the context of its equivalence with the GMNN architecture. Section 3 discusses the input-space quantization performed by CMAC, with emphasis being placed on the case of uniform quantization; standard and modified versions of CMAC encoding are considered. In Section 4 the expected form of the CMAC address distance is derived (for the uniform quantization case) and compared with experimental data. Section 5 provides performance comparison between the CMAC network and its basis-function model on a case problem of chaotic time-series prediction. The paper is concluded in Section 6.
Section snippets
General structure of CMAC mapping
The function of CMAC has its roots in the operation of the biological cerebellar cortex and appeals through its intuitiveness and simplicity. Essentially, CMAC covers the input space with a number of overlapping 'sensors' or 'association cells', such that each sensor is active for points within a certain small region of the input space and a fixed number K of sensors is activated by any given input (see Fig. 1). Any particular input to the network generates a response in the form of the
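The covering scheme described above can be sketched in a few lines: K layers of overlapping uniform quantizers, each layer offset by a fraction of one interval, so that every input activates exactly K cells whose weights are summed. The class name, the equal-offset layout, and the LMS-style update below are illustrative assumptions for a 1-D input, not the paper's exact formulation.

```python
import numpy as np

class TinyCMAC:
    """Minimal 1-D CMAC sketch: K overlapping uniform quantizers (layers),
    each shifted by 1/K of one interval; any input activates exactly K cells."""

    def __init__(self, K=4, cells_per_layer=16, lo=0.0, hi=1.0):
        self.K = K
        self.lo = lo
        self.width = (hi - lo) / cells_per_layer        # interval per layer
        self.w = np.zeros((K, cells_per_layer + 1))     # one weight table per layer

    def address(self, x):
        # Layer k's quantizer is offset by k/K of one interval, so nearby
        # inputs share most of their K active cells (local generalization).
        return [int((x - self.lo + k * self.width / self.K) // self.width)
                for k in range(self.K)]

    def __call__(self, x):
        # Response = sum of the K weights selected by the address vector
        return sum(self.w[k, a] for k, a in enumerate(self.address(x)))

    def train(self, x, target, lr=0.5):
        # LMS update shared equally among the K active cells
        err = target - self(x)
        for k, a in enumerate(self.address(x)):
            self.w[k, a] += lr * err / self.K
```

Training on one point also raises the response at neighbouring points in proportion to the number of shared cells, which is the local generalization property exploited throughout the paper.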
The structure of CMAC quantization
Each of the K nodes of a CMAC network performs a variant of scalar-product (vector) quantization of the input space: each of the D components of an input vector is quantized individually, which results in hyper-rectangular quantization cells oriented along the coordinate axes of the D-dimensional input space. Each of the K vector quantizers has D scalar components (one per input-vector dimension), and conversely, each coordinate of an input vector is quantized by K scalar quantizers (one per network node). The
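For the uniform case, the D x K array of scalar quantizer outputs can be computed directly; entry (d, k) is the output of node k's scalar quantizer for component d. The function name and the equal-offset layout (layer k shifted by k/K of one interval) are assumptions for illustration.

```python
import numpy as np

def quantization_matrix(x, K, width):
    """D x K matrix Q(x) for uniform CMAC quantization: entry (d, k) is the
    output of the k-th node's scalar quantizer applied to component x[d],
    with node k's quantizer offset by k/K of one quantization interval."""
    x = np.asarray(x, dtype=float)
    D = x.shape[0]
    Q = np.empty((D, K), dtype=int)
    for k in range(K):
        Q[:, k] = np.floor((x + k * width / K) / width).astype(int)
    return Q
```

Each column of Q(x) identifies the hyper-rectangular cell of one node; each row shows how a single input coordinate is seen by all K nodes.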
Derivation of CMAC address distance and proximity functions
It can be seen that the addresses generated by CMAC for two input points, x and y, are determined by the quantization matrices Q(x) and Q(y) produced for these two points. In particular, the kth components of the address vectors generated for x and y will be identical as long as the kth columns of Q(x) and Q(y) are the same. On the other hand, if the kth network quantizer produces different outputs for x and y in at least one dimension, then Q(x) and Q(y)—and hence A(x) and A(y)—will be different.
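The column-matching rule above gives a direct way to count shared address components, i.e. the address proximity of two inputs. The helper below is a minimal sketch of that count; the function name is an assumption.

```python
import numpy as np

def address_proximity(Qx, Qy):
    """Number of CMAC address components shared by two inputs: the k-th
    address component is identical for x and y exactly when the k-th
    columns of Q(x) and Q(y) agree in every dimension."""
    Qx, Qy = np.asarray(Qx), np.asarray(Qy)
    return int(np.sum(np.all(Qx == Qy, axis=0)))
```

With K nodes, the proximity ranges from 0 (no shared cells) to K (identical addresses), and K minus the proximity is the address distance analysed in this section.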
Problem setting
To compare the performance of CMAC and its model (with the expected address proximity as the basis function) we considered the problem of predicting the chaotic Mackey-Glass series, which has received much attention in the neural network community (Lapedes and Farber, 1987; Moody and Darken, 1989; Moody, 1989). The series arises as a solution to the following difference-delay equation

x(t+1) = (1 - b) x(t) + a x(t - τ) / (1 + x(t - τ)^10),

where both t and τ are integers. When the parameters a and b are set to a=0.2
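A series of this kind can be generated with a short routine. The discrete difference form and the parameter values below (b=0.1, τ=17, constant initial history) are the choices commonly used in the neural network literature and are assumed here for illustration; only a=0.2 is stated in the excerpt above.

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2):
    """Discrete Mackey-Glass series (a sketch of the usual difference form):
    x[t+1] = (1 - b) * x[t] + a * x[t - tau] / (1 + x[t - tau] ** 10).
    The initial history of length tau+1 is held constant at x0."""
    x = [x0] * (tau + 1)
    while len(x) < n + tau + 1:
        xt, xlag = x[-1], x[-(tau + 1)]
        x.append((1 - b) * xt + a * xlag / (1 + xlag ** 10))
    return x[tau + 1:]   # drop the artificial constant history
```

For prediction experiments, input vectors are typically built from delayed samples of the series and the target is a sample some steps ahead.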
Conclusions
The quantization performed by the CMAC network has been described in the context of equivalence between CMAC and the more general GMNN architecture. Particular emphasis was placed on the case of uniform quantization, which is particularly regular and amenable to formal analysis.
In particular, we have derived the formula for the expected address distance function of the CMAC network and shown how it can be used to create an approximate basis-function model of this architecture. The closed-form
Acknowledgements
One of the authors (AK) would like to thank the Overseas Research Foundation, York University and UMIST for supporting this research.
References (38)
- A theory of cerebellar function. Mathematical Biosciences (1971)
- Mechanisms of planning and problem solving in the brain. Mathematical Biosciences (1979)
- Estimating the embedding dimension. Physica D (1991)
- The interpolation capabilities of the binary CMAC. Neural Networks (1993)
- The CMAC and a Theorem of Kolmogorov. Neural Networks (1992)
- N-tuple regression network. Neural Networks (1996)
- A comparison of five algorithms for the training of CMAC memories for learning control systems. Automatica (1992)
- Data storage in the cerebellar model articulation controller (CMAC). Journal of Dynamic Systems, Measurement and Control (1975)
- A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Journal of Dynamic Systems, Measurement and Control (1975)
- Bartels, R.H., Beatty, J.C., & Barsky, B.A. (1987). An introduction to splines for use in computer graphics and...
- Multivariable functional interpolation and adaptive networks. Complex Systems
- On the convergence of the Albus perceptron. IMA Journal of Mathematics Control and Information
- Predicting chaotic series. Physical Review Letters
- Regularization theory and neural network architectures. Neural Computation