Abstract
This work addresses the relevance of Gibson’s concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations, we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. We investigate how the originally defined representational concept for the perception of affordances (in terms of either optical flow or heuristically determined 3D features of perceptual entities) should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we propose a new framework for the cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and that it offers the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation applying state-of-the-art visual descriptors and regions of interest extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
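The core idea of the abstract, relating visual cues extracted from regions of interest to observed interaction outcomes via a learned classifier, can be illustrated with a minimal sketch. The feature values, labels, and the use of scikit-learn's decision tree (as a stand-in for the C4.5 learner cited in [20]) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: predicting affordance-like interaction opportunities
# from visual cues. All feature values and labels below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-region feature vectors, e.g. aggregated SIFT descriptor
# statistics, region size, and a coarse 3D height estimate for each region
# of interest extracted from the simulated robot scenario.
X = np.array([
    [0.12, 0.80, 35.0, 0.05],   # flat, low region
    [0.45, 0.20, 12.0, 0.30],   # small protruding region
    [0.40, 0.25, 14.0, 0.28],
    [0.10, 0.75, 40.0, 0.04],
])
# Hypothetical interaction outcomes observed by the robot:
# 1 = the region afforded the interaction (e.g. liftable), 0 = it did not.
y = np.array([0, 1, 1, 0])

# Learn a relation between visual cues and interaction outcomes,
# analogous to the rule-induction step referenced in the abstract.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# Predict the affordance of a newly perceived region from its cues alone.
new_region = np.array([[0.42, 0.22, 13.0, 0.29]])
print("interaction predicted:", bool(clf.predict(new_region)[0]))
```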
References
Gibson, J.J.: The Ecological Approach to Visual Perception. Houghton Mifflin, Boston (1979)
Neisser, U.: Cognition and Reality. Principles and Implications of Cognitive Psychology. Freeman & Co., San Francisco (1976)
Gibson, E.J.: Exploratory behavior in the development of perceiving, acting and the acquiring of knowledge. Annual Review of Psychology 39, 1–41 (1988)
Faillenot, I., Toni, I., Decety, J., Grégoire, M.-C., Jeannerod, M.: Visual pathways for object-oriented action and object recognition: functional anatomy with PET. Cerebral Cortex 7, 77–85 (1997)
Fagg, A.H., Arbib, M.A.: Modeling parietal-premotor interaction in primate control of grasping. Neural Networks 11(7-8), 1277–1303 (1998)
Wheeler, D.S., Fagg, A.H., Grupen, R.A.: Learning Prospective Pick and Place Behavior. In: Proc. 2nd International Conference on Development and Learning, pp. 197–202. IEEE Computer Society, Cambridge (2002)
Fitzpatrick, P., Metta, G., Natale, L., Rao, S., Sandini, G.: Learning About Objects Through Action - Initial Steps Towards Artificial Cognition. In: Proc. IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, May 12-17 (2003)
Stoytchev, A.: Behavior-Grounded Representation of Tool Affordances. In: Proc. IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, April 18-22 (2005)
Stark, L., Bowyer, K.W.: Function-based recognition for multiple object categories. Image Understanding 59(10), 1–21
Rivlin, E., Dickinson, S.J., Rosenfeld, A.: Recognition by functional parts. Computer Vision and Image Understanding 62(2), 164–176 (1995)
Bogoni, L., Bajcsy, R.: Interactive Recognition and Representation of Functionality. Computer Vision and Image Understanding 62(2), 194–214 (1995)
Edwards, M.G., Humphreys, G.W., Castiello, U.: Motor facilitation following action observation: a behavioural study in prehensile action. Brain and Cognition 53, 495–502 (2003)
Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
Cos-Aguilera, I., Cañamero, L., Hayes, G.M., Gillies, A.: Ecological integration of affordances and drives for behaviour selection. In: Bryson, J., et al. (eds.) Proc. Workshop on Modeling Natural Action Selection, pp. 225–228. AISB Press (2005)
Cos-Aguilera, I., Cañamero, L., Hayes, G.M.: Using a SOFM to learn Object Affordances. In: Proc. Workshop of Physical Agents, WAF 2004, Girona, Catalonia, Spain (March 2004)
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Fritz, G., Paletta, L., Kumar, M., Dorffner, G., Breithaupt, R., Rome, E. (2006). Visual Learning of Affordance Based Cues. In: Nolfi, S., et al. (eds.) From Animals to Animats 9. SAB 2006. Lecture Notes in Computer Science, vol. 4095. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11840541_5
DOI: https://doi.org/10.1007/11840541_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-38608-7
Online ISBN: 978-3-540-38615-5