
A Camera-Direction Dependent Visual-Motor Coordinate Transformation for a Visually Guided Neural Robot

  • Conference paper

Abstract

Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one's head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple, fully trained attractor network that computes head-centred coordinates given eye position and a perceived retinal object position. We demonstrate this system on artificial data and then apply it within a fully neurally implemented control system that visually guides a simulated robot to a table to grasp an object. The integrated system's input stage is a primitive visual system with a what-where pathway that localises the target object in the visual field. The coordinate transformation network combines the visually perceived object position with the camera pan-tilt angle to compute the target position in a body-centred frame of reference. This position is used by a reinforcement-trained network to dock a simulated PeopleBot robot at a table so that it can reach the object. Hence, computing coordinate transformations with an attractor network is both biologically relevant and technically useful for this important class of computations.
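In the paper this retina-to-body-centred mapping is learned by the attractor network itself; as a point of reference, the same mapping can be written down geometrically. The sketch below is a minimal illustration, not the paper's neural implementation: it converts a normalised retinal object position and the camera pan-tilt angles into a body-centred viewing direction. The function name, field-of-view values, and sign conventions are assumptions made for illustration only.

```python
import numpy as np

def retinal_to_body_centred(px, py, pan, tilt,
                            fov_x=np.deg2rad(60), fov_y=np.deg2rad(45)):
    """Map a perceived retinal (image) position plus camera pan/tilt angles
    to a body-centred viewing direction.

    px, py     : object position in the image, normalised to [-1, 1],
                 with (0, 0) at the image centre.
    pan, tilt  : camera pan and tilt angles in radians (0, 0 = straight ahead).
    fov_x, fov_y : assumed camera field of view (illustrative values only).

    Returns a unit direction vector in the body-centred frame
    (x = forward, y = left, z = up).
    """
    # Angular offset of the object from the optical axis in the camera frame.
    az_cam = px * fov_x / 2.0   # azimuth
    el_cam = py * fov_y / 2.0   # elevation

    # Direction of the object expressed in the camera frame.
    d_cam = np.array([
        np.cos(el_cam) * np.cos(az_cam),
        np.cos(el_cam) * np.sin(az_cam),
        np.sin(el_cam),
    ])

    # Rotate by the camera tilt (about the body's y axis, positive = camera up)
    # and pan (about the body's z axis, positive = camera left) to obtain
    # body-centred coordinates.
    R_tilt = np.array([
        [np.cos(tilt), 0.0, -np.sin(tilt)],
        [0.0,          1.0,  0.0],
        [np.sin(tilt), 0.0,  np.cos(tilt)],
    ])
    R_pan = np.array([
        [np.cos(pan), -np.sin(pan), 0.0],
        [np.sin(pan),  np.cos(pan), 0.0],
        [0.0,          0.0,         1.0],
    ])
    return R_pan @ R_tilt @ d_cam

# Example: object offset horizontally from the image centre by 0.3 of the
# half-width, camera panned 20 degrees and tilted down by 10 degrees.
print(retinal_to_body_centred(px=0.3, py=0.0,
                              pan=np.deg2rad(20), tilt=np.deg2rad(-10)))
```

The paper's contribution is that this transformation is carried out by a trained attractor network operating on population-coded inputs rather than by explicit trigonometry; the closed-form version above only makes the input-output relation of that network concrete.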

Copyright information

© 2006 Springer-Verlag London Limited

About this paper

Cite this paper

Weber, C., Muse, D., Elshaw, M., Wermter, S. (2006). A Camera-Direction Dependent Visual-Motor Coordinate Transformation for a Visually Guided Neural Robot. In: Macintosh, A., Ellis, R., Allen, T. (eds) Applications and Innovations in Intelligent Systems XIII. SGAI 2005. Springer, London. https://doi.org/10.1007/1-84628-224-1_12

  • DOI: https://doi.org/10.1007/1-84628-224-1_12

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84628-223-2

  • Online ISBN: 978-1-84628-224-9

  • eBook Packages: Computer Science, Computer Science (R0)
