Abstract
In this article, we propose a model that self-organizes a map for robot navigation from the robot's own visual information. The robot is assumed to have visual sensors arranged around its body. The recognition model is based on Kohonen's self-organizing map (SOM), originally proposed as a model of cortical self-organization. An ordinary SOM consists of a two-dimensional array of neuron-like feature-detector units. Our goal is to extract direction and position information separately from the visual input, which is a function of both. Our model consists of two layers: the first layer, for directional information, consists of units arranged in a circular array; the second layer, for position information, consists of a two-dimensional array. The units in the second layer receive inputs from all units in the first layer through plastic inhibitory synapses. Computer simulations show that, through training, the units in the first layer develop direction sensitivity and lose position sensitivity, while the units in the second layer develop position sensitivity and lose direction sensitivity.
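The abstract does not give the learning equations, but the described architecture (a circular SOM layer for direction, a two-dimensional SOM layer for position, and plastic inhibitory synapses from the first layer to the second) can be sketched as follows. This is a minimal, hypothetical reconstruction, not the authors' implementation: the layer sizes, Gaussian neighborhood functions, and the anti-Hebbian rule for the inhibitory weights `c` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N1 = 16          # first layer: circular array of direction units (assumed size)
H, W = 8, 8      # second layer: 2-D array of position units (assumed size)
D = 12           # dimensionality of the visual input vector (assumed)

# Feed-forward SOM codebook vectors for each layer, random initialization.
w1 = rng.random((N1, D))       # first-layer weights
w2 = rng.random((H * W, D))    # second-layer weights
c = np.zeros((H * W, N1))      # plastic inhibitory synapses, layer 1 -> layer 2

def ring_dist(i, j, n=N1):
    """Distance on the circular (ring) topology of the first layer."""
    d = abs(i - j)
    return min(d, n - d)

def grid_dist(i, j, w=W):
    """Manhattan distance on the 2-D grid topology of the second layer."""
    yi, xi, yj, xj = i // w, i % w, j // w, j % w
    return abs(yi - yj) + abs(xi - xj)

def train_step(x, lr=0.1, sigma=2.0, lr_inh=0.05):
    """One training step on a visual input vector x (hypothetical rule)."""
    # Layer 1: ordinary SOM competition and neighborhood update.
    k1 = np.argmin(np.linalg.norm(w1 - x, axis=1))
    a1 = np.zeros(N1)
    for i in range(N1):
        h = np.exp(-ring_dist(i, k1) ** 2 / (2 * sigma ** 2))
        w1[i] += lr * h * (x - w1[i])
        a1[i] = h                      # layer-1 activity pattern

    # Layer 2: competition on input similarity minus inhibition from layer 1.
    score = -np.linalg.norm(w2 - x, axis=1) - c @ a1
    k2 = np.argmax(score)
    for j in range(H * W):
        h = np.exp(-grid_dist(j, k2) ** 2 / (2 * sigma ** 2))
        w2[j] += lr * h * (x - w2[j])
        # Anti-Hebbian update: co-active unit pairs strengthen their mutual
        # inhibition, which tends to push the direction information carried
        # by layer 1 out of the second layer's representation.
        c[j] += lr_inh * h * a1
```

A training loop would simply call `train_step` on each visual input sampled while the robot moves; the claim of the article is that this kind of competition plus cross-layer inhibition separates the two factors, with layer 1 converging to direction tuning and layer 2 to position tuning.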
Additional information
This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.
Cite this article
Oshiro, N., Kurata, K. Separating visual information into position and direction by two inhibitory connected SOMs. Artif Life Robotics 9, 86–89 (2005). https://doi.org/10.1007/s10015-004-0329-1