Abstract:
We present a neural architecture for scene representation that stores semantic information about objects in the robot's workspace. We show how this representation can be queried through low-level features such as color and size, through feature conjunctions, and through symbolic labels. This is made possible by binding different feature dimensions through space and by integrating these space-feature representations with an object recognition system. Queries lead to the activation of a neural representation of previously seen objects, which can then be used to drive object-oriented action. The representation is continuously linked to sensory information and autonomously updates when objects are moved or removed.
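The querying behavior the abstract describes can be caricatured in a few lines of ordinary code. The following is a hedged toy sketch, not the paper's neural space-feature binding: the object labels, feature names, and matching rule are illustrative assumptions, and the paper's actual mechanism operates on continuous neural fields rather than symbolic records.

```python
# Toy scene memory (illustrative only, not the paper's neural implementation):
# each stored object carries low-level features and a workspace position.
scene = [
    {"label": "cup",  "color": "red",  "size": "small", "pos": (0.2, 0.5)},
    {"label": "box",  "color": "blue", "size": "large", "pos": (0.7, 0.1)},
    {"label": "ball", "color": "red",  "size": "large", "pos": (0.4, 0.8)},
]

def query(scene, **features):
    """Return stored objects matching every given feature constraint.

    A single keyword (e.g. color="red") is a low-level feature query;
    several keywords form a feature conjunction; label="ball" plays the
    role of a symbolic-label query.
    """
    return [obj for obj in scene
            if all(obj.get(k) == v for k, v in features.items())]

# Conjunction query: red AND large. The matching object's stored position
# is the kind of information that could then drive object-oriented action.
matches = query(scene, color="red", size="large")
```

In the paper's architecture this matching emerges from neural dynamics bound through shared spatial dimensions rather than from explicit record lookup; the sketch only mirrors the input-output behavior of such a query.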
Date of Conference: 24-27 August 2011
Date Added to IEEE Xplore: 10 October 2011