Neurocomputing

Volumes 44–46, June 2002, Pages 515–520

Quantitative analysis of kernel properties in Kohonen's self-organizing map algorithm: Gaussian and difference of Gaussians neighborhoods

https://doi.org/10.1016/S0925-2312(02)00410-1

Abstract

Recent experimental evidence suggests that processes of plastic reconfiguration of the primate cerebral cortex may involve an inhibitory component. Here, we modify Kohonen's self-organizing map model of the cortex to include surround inhibition in its adaptation kernel. This addition not only improves the accuracy of the cortical representation, as measured by quantization error, but also tends to produce pinwheel patterns similar to those observed in primary visual cortex.

Introduction

In the decades following the proliferation of cortical microelectrode recording studies in the 1960s, neuroscience has accumulated a detailed catalog of cellular properties and organization for different cortical regions [7]. For example, primary visual cortex is characterized by a preponderance of neurons with linear receptive field properties [4], which are organized locally into orientation-specific pinwheels [3] and globally assembled into a single retinotopic map [10]. Thus, it was of major importance to learn that all three characteristic levels of visual cortex organization reemerge in auditory cortex when visual input is surgically redirected to that area [9].

It seems highly unlikely that these organizational properties are genetically determined. Rather, such evidence suggests a new view of cortical organization in which these properties are informationally induced. Specifically, we propose that all cerebral cortical areas utilize the same knowledge-seeking, unsupervised learning algorithm to extract information from the particular set of afferent signals that they receive. The characteristic cellular and organizational properties of individual cortical areas would then arise from the interaction between the specific structure of the afferent information available to an area and this common knowledge-seeking algorithm. We have termed this idea the neuronal empiricism hypothesis [2].

The interesting questions raised by the hypothesis are, first, what algorithm all cortex employs and, second, how well it can explain the particular functional characteristics of quantitatively well-characterized cortical regions. Computational neurobiological models embody such scientific hypotheses in their most explicit, exact, and testable form, capable of complete representation in a set of equations or a computer program.

Here, we examine two variations of one promising computational model of the primate cerebral cortex, Kohonen's self-organizing map (SOM) algorithm [5]. Aziz-Zadeh and Beatty [1] and Obermayer et al. [8] have previously demonstrated that this simple algorithm generates a two-dimensional structure similar to that seen in the hand area of primary somatosensory cortex (area 3b) when presented with simulated data like that provided by the primate hand to somatosensory cortex. These studies utilized the conventional form of Kohonen's algorithm, in which, on each iteration, units of the cortical map in the region of the best-fitting unit are adapted to become more similar to the current sensory input.
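
As an illustration of that conventional form, a single adaptation step can be sketched as follows. This is not code from the studies cited; the array names, parameters, and exact Gaussian neighborhood used here are hypothetical and serve only to make the update rule concrete.

import numpy as np

def som_update_gaussian(weights, grid, x, alpha, sigma):
    """One illustrative iteration of Kohonen's SOM with a Gaussian neighborhood.

    weights : (n_units, n_features) weight vectors of the map units
    grid    : (n_units, 2) positions of the units on the map lattice
    x       : (n_features,) current sensory input sample
    alpha   : learning rate at this iteration
    sigma   : neighborhood width at this iteration
    """
    # Best-fitting unit: the unit whose weight vector is closest to the input
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood centered on the winner, over lattice distance
    d2 = np.sum((grid - grid[c]) ** 2, axis=1)
    h = np.exp(-d2 / sigma ** 2)
    # Units near the winner are pulled toward the input; distant units barely move
    weights += alpha * h[:, None] * (x - weights)
    return weights

In practice alpha and sigma are decreased over iterations so that the map first orders globally and then refines locally.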

However, experimental data suggest that active inhibitory processes may also be involved in dynamic cortical organization. Wang, for example, observed a segregation of patches representing the dorsal surface of the hand in an experiment in which portions of the ventral finger pads were stimulated simultaneously [11]. These data suggest the presence of an active inhibitory process that isolates the regions being modified by Hebbian conditioning. Thus, we explore both the traditional Gaussian SOM adaptation kernel and a new difference of Gaussians (DOG) kernel that introduces an inhibitory surround into the adaptation process.

Methods

An intrinsic inhibitory component to Kohonen's self-organizing map algorithm can be produced elegantly by using a difference of Gaussians (DOG) adaptation kernel [6]. This kernel has a central excitatory component and a peripheral inhibitory component, represented simply as the difference of two Gaussian distributions of differing variance, the narrower one being excitatory and the broader one inhibitory:

h_{ci} = \gamma_E \, \alpha(t) \exp\!\left(-\frac{\|m_c - m_i\|^2}{\sigma_E^2(t)}\right) - \gamma_I \, \alpha(t) \exp\!\left(-\frac{\|m_c - m_i\|^2}{\sigma_I^2(t)}\right),

where γ_E and γ_I are the gains of the excitatory and inhibitory components, respectively.
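
A minimal sketch of this kernel, under the assumption that it is evaluated over squared distances on the map lattice (as in the Gaussian sketch above) and with hypothetical parameter names, is given below; α(t) is applied in the update step as before.

import numpy as np

def dog_neighborhood(grid, c, gamma_E, gamma_I, sigma_E, sigma_I):
    """Difference-of-Gaussians neighborhood centered on the winning unit c.

    The narrower Gaussian (sigma_E) supplies the central excitation and the
    broader one (sigma_I) the inhibitory surround; gamma_E and gamma_I scale
    the two components.
    """
    d2 = np.sum((grid - grid[c]) ** 2, axis=1)
    return gamma_E * np.exp(-d2 / sigma_E ** 2) - gamma_I * np.exp(-d2 / sigma_I ** 2)

Substituting this kernel for the Gaussian neighborhood h in the update sketched earlier makes the surround actively inhibitory: units there receive a negative coefficient and are pushed away from the current input rather than pulled toward it.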

Results and discussion

Two surprising results emerged from these simulations, which were designed to explore the emergent properties of the DOG adaptation kernel. The first was that the DOG kernel often produced a model that fitted the sensory input data more accurately than did Kohonen's original Gaussian adaptation kernel. This finding was entirely unexpected, since no attempt had been made to improve the model's fit in any direct manner.
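
The fit referred to here, quantization error, is commonly computed as the mean distance between each input sample and the weight vector of its best-matching unit; a minimal sketch of such a measurement, with hypothetical names and not taken from the paper, follows.

import numpy as np

def quantization_error(weights, data):
    """Mean distance from each input sample to its best-matching unit."""
    # Distance from every sample to every unit's weight vector
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    # Average, over samples, of the distance to the closest unit
    return d.min(axis=1).mean()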

The robustness of this finding may be seen in Table 1, which reveals a

Acknowledgements

Special thanks are due to K. Chau, R. Ly, and K. Nguyen for their help.

This project is supported by the Human Brain Project under NIMH Grant 1K07MH01953, NIDCD Grant DC/8424559, and NSF Grant MH/8429337.
