An Empirical Study of Actor-Critic Methods for Feedback Controllers of Ball-Screw Drivers

  • Conference paper
Natural and Artificial Computation in Engineering and Medical Applications (IWINAC 2013)

Abstract

In this paper we study the use of Reinforcement Learning Actor-Critic methods to learn the control of a ball-screw feed drive. We have tested three different actors: Q-value based, Policy Gradient, and CACLA actors. We have paid special attention to their sensitivity to suboptimal learning gain tuning. As a benchmark, we have used randomly initialized PID controllers. CACLA provides stable control comparable to that of the best heuristically tuned PID controller, despite having no knowledge of the actual error value.
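For readers unfamiliar with CACLA (Continuous Actor-Critic Learning Automaton, van Hasselt and Wiering, 2007), the sketch below illustrates its core update rule: the critic performs a standard TD(0) value update, while the actor is moved toward the action actually executed only when the temporal-difference error is positive. This is a minimal illustrative sketch, not the paper's implementation; the linear function approximation, Gaussian exploration, class name, and gain values are all assumptions made for the example.

    import numpy as np

    # Minimal CACLA sketch with linear function approximation (illustrative only;
    # feature map, gains, and exploration scale are hypothetical choices).
    class CACLA:
        def __init__(self, n_features, alpha=0.01, beta=0.01, gamma=0.95, sigma=0.1):
            self.v = np.zeros(n_features)        # critic weights: V(s) = v . phi(s)
            self.w = np.zeros(n_features)        # actor weights:  A(s) = w . phi(s)
            self.alpha, self.beta = alpha, beta  # critic / actor learning gains
            self.gamma = gamma                   # discount factor
            self.sigma = sigma                   # Gaussian exploration scale

        def act(self, phi):
            # Exploratory action around the deterministic actor output.
            return float(self.w @ phi) + np.random.normal(0.0, self.sigma)

        def update(self, phi, action, reward, phi_next):
            # Temporal-difference error of the critic.
            delta = reward + self.gamma * float(self.v @ phi_next) - float(self.v @ phi)
            self.v += self.alpha * delta * phi          # critic: standard TD(0) step
            if delta > 0:                               # CACLA rule: update the actor
                target = action - float(self.w @ phi)   # only on positive TD error,
                self.w += self.beta * target * phi      # toward the executed action

In a feed-drive setting, at each control step one would compute a feature vector phi of the drive state, call act(phi) to obtain the control command, apply it, observe the reward (e.g., a penalty on tracking error) and the next feature vector, and then call update(phi, action, reward, phi_next).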

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fernandez-Gauna, B., Ansoategui, I., Etxeberria-Agiriano, I., Graña, M. (2013). An Empirical Study of Actor-Critic Methods for Feedback Controllers of Ball-Screw Drivers. In: Ferrández Vicente, J.M., Álvarez Sánchez, J.R., de la Paz López, F., Toledo Moreo, F.J. (eds) Natural and Artificial Computation in Engineering and Medical Applications. IWINAC 2013. Lecture Notes in Computer Science, vol 7931. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38622-0_46

  • DOI: https://doi.org/10.1007/978-3-642-38622-0_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38621-3

  • Online ISBN: 978-3-642-38622-0

  • eBook Packages: Computer Science, Computer Science (R0)
