
Evolution of Neuro-controllers for Multi-link Robots

Chapter in Innovations in Hybrid Intelligent Systems

Part of the book series: Advances in Soft Computing (AINSC, volume 44)

Abstract

A general method is presented for learning the inverse kinematics of multi-link robots by means of neuro-controllers. Analytical solutions exist in the literature for the best-known robots; however, these solutions are specific to a particular robot configuration and do not generalize to other robot morphologies. The proposed method is general in the sense that it does not depend on the robot morphology. We base our method on the Evolutionary Computation paradigm to obtain incrementally better neuro-controllers. Furthermore, the proposed method addresses some very specific issues in robotic neuro-controller learning. (1) It avoids neural-network learning algorithms that rely on the classical supervised input-target scheme, so neuro-controllers can be obtained without providing targets or correct answers, which in this case are unknown a priori. (2) It can converge beyond locally optimal solutions, escaping one of the main drawbacks of gradient-descent-based neural-network training algorithms when applied to highly redundant robot morphologies. (3) With learning algorithms such as Neuro-Evolution of Augmenting Topologies (NEAT), it is also possible to learn the neural network topology on the fly, which is otherwise a common source of empirical trial and error in neuro-controller design. Finally, experimental results are provided by applying the method to two multi-link robot learning tasks, with a comparison between fixed and learnable topologies.
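The fitness-driven scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the chapter's implementation: it evolves the weights of a fixed-topology two-layer network that maps a target position to the joint angles of a 2-link planar arm, scoring each genome by the reach error of the end-effector, so no supervised input-target pairs are needed. All sizes, rates, and helper names (`controller`, `forward`, `evolve`) are assumptions made for the sketch.

```python
import math
import random

L1, L2 = 1.0, 1.0                    # link lengths (illustrative)
H = 8                                # hidden units in the fixed topology
N_W = 2 * H + H + H * 2 + 2          # layer weights plus biases

def controller(w, x, y):
    """Feed a target (x, y) through a 2-H-2 tanh network; return joint angles."""
    w1, b1 = w[0:2 * H], w[2 * H:3 * H]
    w2, b2 = w[3 * H:5 * H], w[5 * H:5 * H + 2]
    hid = [math.tanh(w1[2 * h] * x + w1[2 * h + 1] * y + b1[h]) for h in range(H)]
    # Output angles squashed into [-pi, pi].
    return [math.pi * math.tanh(sum(w2[2 * h + o] * hid[h] for h in range(H)) + b2[o])
            for o in range(2)]

def forward(t1, t2):
    """Forward kinematics of the 2-link planar arm."""
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def fitness(w, targets):
    """Mean end-effector reach error over the sampled targets (lower is better)."""
    err = 0.0
    for tx, ty in targets:
        t1, t2 = controller(w, tx, ty)
        px, py = forward(t1, t2)
        err += math.hypot(px - tx, py - ty)
    return err / len(targets)

def evolve(gens=150, pop=30, elite=6, sigma=0.3, seed=0):
    """Simple truncation-selection evolutionary loop over weight vectors."""
    rng = random.Random(seed)
    # Sample reachable targets by drawing joint angles and mapping them forward.
    targets = [forward(rng.uniform(-math.pi, math.pi),
                       rng.uniform(-math.pi, math.pi)) for _ in range(30)]
    population = [[rng.gauss(0, 1) for _ in range(N_W)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, targets))
        parents = population[:elite]
        # Refill with Gaussian-mutated copies of the surviving parents.
        population = parents + [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            for _ in range(pop - elite)]
    return min(population, key=lambda w: fitness(w, targets))

best = evolve()
print("reach error at (1, 1):", fitness(best, [(1.0, 1.0)]))
```

NEAT, which the abstract singles out, would additionally mutate the topology (adding nodes and connections) rather than only the weights; the truncation-selection loop above stands in for that machinery to keep the sketch short.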



References

  1. J. Denavit and R. Hartenberg. A kinematic notation for lower-pair mechanisms based on matrices. ASME J. Applied Mechanics, pp. 215–221, June 1955.

  2. P.J. Angeline, G.M. Saunders, and J.B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Trans. Neural Networks, 5(1):54–65, 1994.

  3. N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Int. Conf. on Evolutionary Computation, pp. 312–317, 1996.

  4. K.O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evol. Comput., 10(2):99–127, 2002.



Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

Cite this chapter

Martín, J.A.H., de Lope, J., Santos, M. (2007). Evolution of Neuro-controllers for Multi-link Robots. In: Corchado, E., Corchado, J.M., Abraham, A. (eds) Innovations in Hybrid Intelligent Systems. Advances in Soft Computing, vol 44. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74972-1_24


  • DOI: https://doi.org/10.1007/978-3-540-74972-1_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74971-4

  • Online ISBN: 978-3-540-74972-1

  • eBook Packages: Engineering
