Abstract
Efficient acquisition of new motor skills is among the most important abilities for making robot applications more flexible, reducing the amount and cost of human programming, and making future robots more autonomous. However, most machine learning approaches to date are not capable of meeting this challenge, as they do not scale into the domain of high-dimensional anthropomorphic and service robots. Instead, robot skill learning needs to rely upon task-appropriate approaches and domain insights. A particularly powerful approach has been driven by the concept of re-usable motor primitives. These have been used to learn a variety of “elementary movements” such as striking movements (e.g., hitting a T-ball, striking a table tennis ball), rhythmic movements (e.g., drumming, gaits for legged locomotion, paddling balls on a string), grasping, jumping, and many others. Here, we take this approach to the next level and show experimentally how most elements required for table tennis can be addressed using motor primitives. We present four important components: (i) a motor primitive formulation that can deal with hitting and striking movements; (ii) how such primitives can be initialized by imitation learning and (iii) generalized by reinforcement learning; and (iv) how selection, generalization, and pruning of motor primitives can be handled using a mixture of motor primitives. The resulting experimental prototypes work well in practice.
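The primitive formulation referred to in component (i) builds on discrete dynamic movement primitives: a stable point attractor shaped by a learnable forcing term, whose shape weights are fitted from demonstrations as in component (ii). As a rough, self-contained orientation, the sketch below shows a minimal standard Ijspeert-style discrete primitive with basis-wise regression of the weights from a single demonstration. All names, gains, and heuristics (`DiscreteDMP`, `alpha`, `beta`, `n_basis`, the basis-width rule) are assumptions for illustration and not the chapter's actual hitting formulation, which additionally handles, for instance, a desired velocity at the hitting point.

```python
import numpy as np


class DiscreteDMP:
    """Minimal discrete dynamic movement primitive (Ijspeert-style sketch).

    Illustrative only: class, parameter, and gain names are assumptions,
    not the authors' implementation.
    """

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_z=8.0, tau=1.0):
        self.alpha, self.beta, self.alpha_z, self.tau = alpha, beta, alpha_z, tau
        # Gaussian basis functions placed along the decaying phase variable z
        self.centers = np.exp(-alpha_z * np.linspace(0.0, 1.0, n_basis))
        self.widths = n_basis ** 1.5 / self.centers  # common width heuristic
        self.weights = np.zeros(n_basis)

    def _forcing(self, z, x0, g):
        psi = np.exp(-self.widths * (z - self.centers) ** 2)
        return (psi @ self.weights) / (psi.sum() + 1e-10) * z * (g - x0)

    def imitate(self, traj, dt):
        """Fit the shape weights to one demonstration (imitation learning)
        using basis-wise locally weighted regression on the forcing term."""
        x0, g = traj[0], traj[-1]
        v = np.gradient(traj, dt) * self.tau
        v_dot = np.gradient(v, dt) * self.tau
        f_target = v_dot - self.alpha * (self.beta * (g - traj) - v)
        z = np.exp(-self.alpha_z / self.tau * dt * np.arange(len(traj)))
        s = z * (g - x0)  # regressor shared by all basis functions
        for i in range(len(self.weights)):
            psi = np.exp(-self.widths[i] * (z - self.centers[i]) ** 2)
            self.weights[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)

    def rollout(self, x0, g, dt, steps):
        """Integrate the primitive towards a (possibly new) goal g."""
        x, v, z, out = x0, 0.0, 1.0, []
        for _ in range(steps):
            v_dot = self.alpha * (self.beta * (g - x) - v) + self._forcing(z, x0, g)
            v += v_dot / self.tau * dt
            x += v / self.tau * dt
            z += -self.alpha_z * z / self.tau * dt
            out.append(x)
        return np.array(out)


if __name__ == "__main__":
    # Learn a smooth one-dimensional demonstration and replay it towards
    # a shifted goal, illustrating goal generalization.
    dt, t = 0.01, np.linspace(0.0, 1.0, 101)
    demo = 0.5 * (1.0 - np.cos(np.pi * t))  # smooth 0 -> 1 motion
    dmp = DiscreteDMP()
    dmp.imitate(demo, dt)
    replay = dmp.rollout(x0=0.0, g=1.5, dt=dt, steps=150)
    print("final position:", replay[-1])
```

In the chapter's setting, the demonstrated weights would subsequently be refined by reinforcement learning (component iii), and several such primitives would be combined and selected through a mixture of motor primitives (component iv).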
Copyright information
© 2014 Springer-Verlag GmbH Berlin Heidelberg
About this chapter
Cite this chapter
Peters, J., Mülling, K., Kober, J. (2014). Experiments with Motor Primitives in Table Tennis. In: Khatib, O., Kumar, V., Sukhatme, G. (eds) Experimental Robotics. Springer Tracts in Advanced Robotics, vol 79. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28572-1_24
DOI: https://doi.org/10.1007/978-3-642-28572-1_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-28571-4
Online ISBN: 978-3-642-28572-1
eBook Packages: Engineering