Neural Networks

Volume 15, Issues 8–9, October–November 2002, Pages 993–1003

2002 Special Issue
Self-organizing maps with recursive neighborhood adaptation

https://doi.org/10.1016/S0893-6080(02)00073-4

Abstract

Self-organizing maps (SOMs) are widely used in several fields of application, from neurobiology to multivariate data analysis. In that context, this paper presents variants of the classic SOM algorithm. With respect to the traditional SOM, the modifications concern the core of the algorithm (the learning rule) but do not alter the two main tasks it performs, i.e. vector quantization combined with topology preservation. After an intuitive justification based on geometrical considerations, three new rules are defined in addition to the original one. They develop interesting properties such as recursive neighborhood adaptation and non-radial neighborhood adaptation. In order to assess their relative performance and speed of convergence, the four rules are used to train several maps and the results are compared according to several error measures (quantization error and topology preservation criteria).

Introduction

The seminal idea behind self-organizing maps (SOMs) can already be found in the work of von der Malsburg (1973), whose goal was to model the stochastic patterns of eye dominance and orientation preference in the visual cortex. After this biological introduction to the topic, the analysis of the mathematical properties of topographic maps only began in the eighties, when Teuvo Kohonen (1982, 1989) simplified von der Malsburg's biological model. Kohonen also clearly stated the famous learning rule that allows topographic maps to organize themselves. This learning process has widely popularized topographic maps under the names of SOMs or ‘self-organizing feature maps’ (SOFMs).

Much of this success can be explained by the fact that SOMs are easy to understand: the task they perform is very intuitive (although, paradoxically, it is very difficult to express with mathematical formulas). More precisely, SOMs perform two subtasks simultaneously: vector quantization and topographic representation. This ‘magic mix’ has been used not only as a vector quantization method, but also in other fields where self-organization plays a key role. For example, SOMs can be used for non-linear blind source separation (Pajunen, Hyvärinen, & Karhunen, 1996) or non-linear projection of data (Kraaijveld et al., 1995, Mao and Jain, 1995). Considering SOMs as a special case of vector quantization, the difference from more classical methods lies in a kind of predefined information given to the neurons before the learning phase. This predefined information may be seen as a mutual organization of the neurons, which affects the learning process. The result is the self-organization property, which tries to reproduce this topographic organization in the quantized space.

In view of that combination of vector quantization and topology preservation, this paper studies the modifications that can be made to the classic SOM algorithm. The proposed changes only affect the ‘heart’ of the algorithm, i.e. the learning rule. The goal consists in defining new learning rules which develop the same mix of interesting properties as the traditional SOM.

After this introduction, Section 2 reviews the SOM algorithm in detail, as stated in Kohonen (1989). Section 3 presents some changes that can be made to the traditional SOM learning rule without altering its fundamental properties (vector quantization and topology preservation). After a short and intuitive presentation, Section 3.1 defines a new learning rule, followed by the presentation of one of its properties in Section 3.2. Section 4 extends the new rule and generalizes it to a set of four rules. Opening the experimental part of the work, Section 5 presents the material used to evaluate the four rules in practice, i.e. the map parameters, the learning sets and the error criteria. Section 6 discusses the results of the experiments from two points of view, i.e. quantization quality and topology preservation. Finally, Section 7 draws some conclusions, showing the advantages of the proposed rules with respect to the traditional one.

Section snippets

Review of the SOM algorithm

Basically, there are two main classes of vector quantization algorithms: the ‘winner takes all’ (WTA) methods and the ‘winner takes most’ ones. In the WTA class, only one neuron is adapted when a learning pattern stimulates the network. This sole neuron is often called the ‘best matching unit’ (BMU) and is defined as the one closest to the pattern, with regard to a well specified distance measure. In more biological terms, the WTA class allows only one neuron to fire and adapt, while several…
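To make the ‘winner takes most’ idea concrete, here is a minimal sketch of one on-line step of the classic SOM rule reviewed in this section, where every unit is adapted with a Gaussian neighborhood weight centered at the BMU. The function and parameter names, and the Gaussian form of the neighborhood, are illustrative assumptions, not code from the paper.

```python
import numpy as np

def som_step(weights, grid, x, alpha=0.1, sigma=1.0):
    """One on-line step of the classic SOM rule (illustrative sketch).

    weights : (n_units, dim) array of codebook vectors
    grid    : (n_units, 2) array of fixed unit positions on the map lattice
    x       : (dim,) stimulating input vector
    """
    # Best matching unit (BMU): the codebook vector closest to x.
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Gaussian neighborhood on the lattice, centered at the BMU:
    # a 'winner takes most' scheme, since every unit gets a nonzero weight.
    lattice_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    h = np.exp(-lattice_dist ** 2 / (2.0 * sigma ** 2))
    # Classic rule: each unit moves radially towards the input x.
    weights += alpha * h[:, None] * (x - weights)
    return weights
```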

Recursive neighborhood adaptation

As mentioned in Section 1, the SOM performs a combination of two tasks: vector quantization and topographic mapping. In the original algorithm, these two subtasks are deeply interlaced in the learning rule. This is because the learning rule adapts the neighbors of the BMU in the same way as the BMU itself, i.e. in a vector quantization way, radially towards the stimulating vector. But fundamentally, there is no reason to adapt neighbors with VQ in mind, because neighbors do not play…
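The snippet breaks off here, but the recursive alternative it introduces (named the fisherman's rule in the next section) can be sketched on a one-dimensional chain of units. The reading below, where each neighbor chases the already-updated position of its predecessor instead of moving radially towards x, is an assumption based on the description above; names and the chain topology are illustrative.

```python
import numpy as np

def fisherman_step_1d(weights, x, alpha=0.5, lam=0.5, radius=3):
    """Recursive neighborhood adaptation on a 1-D chain (hedged sketch).

    Assumed reading: the BMU moves towards the input x, then each neighbor
    at lattice distance k moves towards the already-updated position of its
    predecessor at distance k-1, rather than radially towards x.
    """
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    weights[bmu] += alpha * (x - weights[bmu])
    for side in (-1, 1):                 # walk away from the BMU on both sides
        prev = weights[bmu].copy()
        for k in range(1, radius + 1):
            j = bmu + side * k
            if not 0 <= j < len(weights):
                break
            weights[j] += lam * (prev - weights[j])   # chase the predecessor
            prev = weights[j].copy()
    return weights
```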

Hybrid rules

Comparing the traditional learning rule directly with the fisherman's rule is almost impossible because their natures are totally different. In order to make them comparable, one has to list their differences one by one and try each combination. Actually, there are two differences between these two learning rules: the traditional rule is non-recursive and purely radial, while the fisherman's rule is recursive but not radial. This leads to the four combinations shown in Table 1 and Figs. 2 and 3.
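One way to picture this 2×2 design is a single parameterized update with two switches. The flag semantics below are an assumed reading, not the paper's exact definitions from Table 1 (which is not reproduced in this snippet); in this simplification the recursive flag is inert when the target is radial, so the sketch recovers the traditional rule and the fisherman's rule at the two documented corners.

```python
import numpy as np

def hybrid_step_1d(weights, x, alpha=0.5, lam=0.5, radius=3,
                   recursive=False, radial=True):
    """Sketch of the four rule combinations on a 1-D chain (illustrative).

    radial=True    : neighbors move towards the input x itself;
    radial=False   : neighbors move towards their predecessor on the chain.
    recursive=True : already-updated predecessor positions are used;
    recursive=False: positions frozen before this step are used.
    Traditional rule: recursive=False, radial=True.
    Fisherman's rule: recursive=True,  radial=False.
    """
    frozen = weights.copy()              # snapshot for the non-recursive case
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    weights[bmu] += alpha * (x - weights[bmu])
    for side in (-1, 1):
        for k in range(1, radius + 1):
            j = bmu + side * k
            if not 0 <= j < len(weights):
                break
            source = weights if recursive else frozen
            target = x if radial else source[j - side]
            weights[j] += lam * (target - weights[j])
    return weights
```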

Experiments

In order to evaluate the learning performance of the four rules described in Section 4, several experiments on different map configurations were carried out. Sections 5.1 (Maps), 5.2 (Learning sets), 5.3 (Error criteria) and 5.4 (Learning parameters) present the maps, the learning sets, the error criteria used to evaluate the learning quality and, finally, the learning parameters.
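As a concrete illustration of the kind of criteria involved, here is a sketch of two standard measures: the average quantization error, and a topographic error in the spirit of Kiviluoto (1996), cited in the reference list. The paper's exact criteria may differ; all names below are illustrative.

```python
import numpy as np

def quantization_error(weights, data):
    """Average distance from each pattern to its best matching unit."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def topographic_error(weights, grid, data):
    """Fraction of patterns whose two best matching units are not direct
    lattice neighbors (Kiviluoto-style topographic error, sketch only)."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    best2 = np.argsort(d, axis=1)[:, :2]          # two closest units
    gap = np.linalg.norm(grid[best2[:, 0]] - grid[best2[:, 1]], axis=1)
    # Assumes a unit-spacing lattice where direct neighbors sit at distance 1.
    return float(np.mean(gap > 1.0))
```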

Results and discussion

Owing to the large number of experimental results and the difficulty of interpreting them, this section includes only a few tables and figures. For example, Figs. 6 and 7 show the averaged results of the 10 000 maps, as a function of the learning parameters. The maps are trained with the spiral (Fig. 6) and with the half-cube (Fig. 7); after a random initialization, each map was trained for 2 (Fig. 6) or 5 (Fig. 7) epochs, with randomly chosen learning parameters. The whole set of numerical results…

Conclusion

This paper shows, using simple examples, that some changes can be made to the traditional SOM learning rule. Indeed, well chosen modifications do not alter the subtle mix of vector quantization and topographic mapping performed by the traditional SOM algorithm. Instead, the proposed learning rules try to control each subtask as independently as possible, making it possible to put the emphasis on one or the other subtask of the SOMs.

From a theoretical point of view, the proposed rules…

Acknowledgements

This work was realized with the support of the ‘Ministère de la Région wallonne’, under the ‘Programme de Formation et d'Impulsion à la Recherche Scientifique et Technologique’. The authors wish to thank the reviewers for their judicious and helpful comments.

References (18)

  • H.-U. Bauer et al.

    Neural maps and topographic vector quantization

    Neural Networks

    (1999)
  • H.-U. Bauer et al.

    Quantifying the neighborhood preservation of self-organizing maps

    IEEE Transactions on Neural Networks

    (1992)
  • P. Demartines et al.

    Curvilinear component analysis: A self-organizing neural network for non linear mapping of data sets

    IEEE Transactions on Neural Networks

    (1997)
  • E.W. Dijkstra

    A note on two problems in connection with graphs

Numerische Mathematik

    (1959)
  • Goodhill, G. J., & Sejnowski, T. J. (1996). Quantifying neighbourhood preservation in topographic mappings. Proceedings...
  • I.T. Jolliffe

    Principal component analysis

    (1986)
  • D.G. Jones et al.

A computational model for the overall pattern of ocular dominance

Journal of Neuroscience

    (1991)
  • Kiviluoto, K. (1996). Topology preservation in self-organizing maps. In IEEE Neural Networks Council (Ed.), Proceedings of the IEEE International Conference on Neural Networks (Vol. 1)...
  • T. Kohonen

    Self-organization of topologically correct feature maps

    Biological Cybernetics

    (1982)
There are more references available in the full text version of this article.


1 Michel Verleysen is a senior Research Associate of the Belgian National Fund for Scientific Research (FNRS).
