
Motor Skills Learning and Generalization with Adapted Curvilinear Gaussian Mixture Model

Journal of Intelligent & Robotic Systems

Abstract

This paper addresses the motor skill learning, representation, and generalization problems in robot imitation learning. To this end, we present an Adapted Curvilinear Gaussian Mixture Model (AdC-GMM), a general extension of the GMM. The proposed model can encode data more compactly and, more critically, is inherently suitable for representing data with strong nonlinearity. To infer the parameters of this model, a Cross Entropy Optimization (CEO) algorithm is proposed, which minimizes the cross entropy loss of the training data. Compared with the traditional Expectation Maximization (EM) algorithm, the CEO can automatically infer the optimal number of components. Finally, generalized trajectories are retrieved by an Adapted Curvilinear Gaussian Mixture Regression (AdC-GMR) model. To encode observations from different frames, the sophisticated task parameterization (TP) technique is introduced. All the proposed algorithms are verified on comprehensive tasks: the CEO is evaluated on a handwriting task, a goal-directed reaching task is used to evaluate the AdC-GMM and AdC-GMR algorithms, and a novel hammer-over-a-nail task is designed to verify the task parameterization technique. Experimental results demonstrate that the proposed CEO is superior to the EM in terms of encoding accuracy and that the AdC-GMM achieves a more compact representation by reducing the number of components by up to 50%. In addition, the trajectory retrieved by the AdC-GMR is smoother, and its approximation error is comparable to that of Gaussian process regression (GPR) even though far fewer parameters need to be estimated; because of this, the AdC-GMR is much faster than the GPR. Finally, simulation experiments on the hammer-over-a-nail task demonstrate that the proposed methods can be deployed in real-world applications.
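The retrieval step summarized above builds on standard Gaussian mixture regression (GMR). As a minimal sketch of that baseline (not the paper's AdC-GMR, and with all parameter values invented for illustration), the following code conditions a hand-specified two-component joint GMM over (t, x) on the input t and mixes the per-component conditional means:

```python
import numpy as np

# Hypothetical 2-component joint GMM over (t, x); parameters are
# illustrative only, not learned from any demonstration data.
priors = np.array([0.5, 0.5])
means = np.array([[0.25, 0.0], [0.75, 1.0]])          # [t, x] centers
covs = np.array([[[0.02, 0.01], [0.01, 0.02]],
                 [[0.02, 0.01], [0.01, 0.02]]])       # per-component cov

def gmr(t):
    """Standard GMR: condition each Gaussian on t, mix by responsibilities."""
    h = np.empty(len(priors))
    cond_means = np.empty(len(priors))
    for k in range(len(priors)):
        mt, mx = means[k]
        stt, stx = covs[k][0, 0], covs[k][0, 1]
        # Marginal likelihood of t under component k (1-D Gaussian).
        h[k] = priors[k] * np.exp(-0.5 * (t - mt) ** 2 / stt) \
               / np.sqrt(2 * np.pi * stt)
        # Conditional mean E[x | t] for component k.
        cond_means[k] = mx + stx / stt * (t - mt)
    h /= h.sum()                      # responsibilities
    return h @ cond_means             # mixture of conditional means

# Retrieve a generalized trajectory by sweeping the input variable.
trajectory = [gmr(t) for t in np.linspace(0, 1, 5)]
```

The AdC-GMM/AdC-GMR of the paper replaces these linear Gaussian components with curvilinear ones; the conditioning-and-mixing structure sketched here is the common backbone.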



Acknowledgements

Special thanks to my supervisor Prof. Weijia Zhou for his support on this work and Guangyu Zhang for his help on experimental data collection. Thanks to Nailong Liu for his technical support. We also thank the anonymous reviewers for their comments on a preliminary version of this paper.

Corresponding author

Correspondence to Yuquan Leng.


Appendix

Proof of Equation (5)

Proof

According to Theorem 1 we have

$$\begin{array}{@{}rcl@{}} &&H^{\times} \left( \mu \Vert \mathcal{N}({\boldsymbol m}, {\boldsymbol {\Sigma}})\right) = H^{\times} \left( \mu_{\mathcal{G}} \Vert \mathcal{N}({\boldsymbol m}, {\boldsymbol {\Sigma}})\right) \\&&= H^{\times} \left( \mathcal{N}({\boldsymbol m}_{\mu}, {\boldsymbol {\Sigma}}_{\mu}) \Vert \mathcal{N}({\boldsymbol m}, {\boldsymbol {\Sigma}})\right) \end{array} $$

So it suffices to prove

$$\begin{array}{@{}rcl@{}} &&{} H^{\times} \left( \mathcal{N}({\boldsymbol m}_{\mu}, {\boldsymbol {\Sigma}}_{\mu}) \Vert \mathcal{N}({\boldsymbol m}, {\boldsymbol {\Sigma}})\right) = \frac{N}{2}\ln (2\pi ) + \frac{1}{2}\text{tr}\left( {{{\boldsymbol {\Sigma}}^{- 1}}{{\boldsymbol {\Sigma}}_{\mu} }} \right) \\&&{} + \frac{1}{2}\ln \det \left( {\boldsymbol {\Sigma}} \right) + \frac{1}{2}\left\| {{\boldsymbol m} - {{\boldsymbol m}_{\mu} }} \right\|_{\boldsymbol {\Sigma}}^{2}. \end{array} $$

To prove this equation, we need the following theorem.

Theorem 2

Let \(f\) be a density function and \(A:\mathbb{R}^{N} \rightarrow \mathbb{R}^{N}\) an invertible affine transform. Define the transformed density \(f_{A}: x \mapsto f(A^{-1}x) / |\det A|\); then we have

$$ H^{\times} (\mu \circ {\boldsymbol A}^{-1} \Vert f_{\boldsymbol A}) =H^{\times} (\mu \Vert f) + \ln \left|\det {\boldsymbol A} \right| $$
(28)

where \(\det A\) denotes the determinant of the linear component of \(A\). The proof is as follows:

$$\begin{array}{@{}rcl@{}} && H^{\times} (\mu \circ {\boldsymbol A}^{-1} \Vert {f_{\boldsymbol A}})\\ &&= - \int \ln{\frac{f(\boldsymbol{A}^{-1} x)}{\left| \det \boldsymbol{A} \right|}} \,\mathrm{d} \mu \circ \boldsymbol{A}^{-1}(x) \\ &&= \ln \left| \det \boldsymbol{A} \right| - \int \ln{f(\boldsymbol{A}^{-1}x)} \,\mathrm{d} \mu \circ \boldsymbol{A}^{-1}(x) \\ &&= \ln \left| \det \boldsymbol{A} \right| - \int \ln{f(y)} \,\mathrm{d} \mu (y) \qquad (\text{substituting } y = \boldsymbol{A}^{-1}x) \\ &&= H^{\times} (\mu \Vert f) + \ln \left|\det {\boldsymbol A} \right| \end{array} $$
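As a numerical sanity check (not part of the original proof), the identity (28) can be verified for Gaussian densities, for which both cross entropies have closed forms: pushing a Gaussian through an affine map yields another Gaussian with transformed parameters. The helper name and parameter values below are illustrative:

```python
import numpy as np

def cross_entropy(a, P, c, Q):
    """Closed-form H×(N(a, P) || N(c, Q)) between two Gaussians."""
    N = len(a)
    Qinv = np.linalg.inv(Q)
    d = a - c
    return 0.5 * (N * np.log(2 * np.pi) + np.log(np.linalg.det(Q))
                  + np.trace(Qinv @ P) + d @ Qinv @ d)

# Two arbitrary Gaussians: mu = N(m1, S1) and f = N(m2, S2).
m1, S1 = np.array([1.0, 2.0]), np.array([[1.0, 0.3], [0.3, 0.5]])
m2, S2 = np.array([0.0, -1.0]), np.array([[2.0, -0.4], [-0.4, 1.0]])

# An invertible affine map A(x) = B x + b; B is its linear component.
B = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([0.5, -0.5])

# mu ∘ A^{-1} and f_A are Gaussians with pushed-forward parameters.
lhs = cross_entropy(B @ m1 + b, B @ S1 @ B.T, B @ m2 + b, B @ S2 @ B.T)
rhs = cross_entropy(m1, S1, m2, S2) + np.log(abs(np.linalg.det(B)))
print(lhs - rhs)  # ≈ 0, matching identity (28)
```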

Define the transformation \(\boldsymbol{A}: \boldsymbol{x} \mapsto \boldsymbol{\Sigma}^{-1/2}(\boldsymbol{x} - \boldsymbol{m})\) and apply it to both densities in Eq. 5. Let \(g\) and \(f\) be the transformed densities of \(\mu\) and \(\mathcal{N}(\boldsymbol{m}, \boldsymbol{\Sigma})\), respectively. Then \(g\) is a Gaussian with center \(\boldsymbol{m}_{g}\) and covariance \(\boldsymbol{\Sigma}_{g}\), and \(f\) is a standard normal distribution with zero mean and identity covariance \(\boldsymbol{I}\). Using Theorem 2, we have

$$ H^{\times} \left( \mathcal{N}({\boldsymbol m}_{\mu}, {\boldsymbol {\Sigma}}_{\mu}) \Vert \mathcal{N}({\boldsymbol m}, {\boldsymbol {\Sigma}})\right) = H^{\times} (g \Vert f) + \frac{1}{2} \ln \det {\boldsymbol {\Sigma}} $$
(29)

where,

$$\begin{array}{@{}rcl@{}} H^{\times} (g \Vert f) &=& H^{\times} \left( g \Vert \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})\right) \\ &=& \int f_{g}(\boldsymbol{x}) \left[\frac{N}{2} \ln(2\pi) + \frac{1}{2}\ln \det(\boldsymbol{I}) + \frac{1}{2}\Vert\boldsymbol{x}\Vert^{2}\right] \mathrm{d}\boldsymbol{x} \\ &=& \frac{N}{2} \ln(2\pi) + \frac{1}{2} \int f_{g}(\boldsymbol{x})\, \Vert (\boldsymbol{x} - \boldsymbol{m}_{g}) + \boldsymbol{m}_{g}\Vert^{2} \,\mathrm{d}\boldsymbol{x} \\ &=& \frac{N}{2} \ln(2\pi) + \frac{1}{2} \int f_{g}(\boldsymbol{x}) \left[\Vert \boldsymbol{x} - \boldsymbol{m}_{g}\Vert^{2} + \Vert\boldsymbol{m}_{g}\Vert^{2} + 2(\boldsymbol{x} - \boldsymbol{m}_{g})^{T} \boldsymbol{m}_{g} \right] \mathrm{d}\boldsymbol{x} \\ &=& \frac{N}{2} \ln(2\pi) + \frac{1}{2}\Vert\boldsymbol{m}_{g}\Vert^{2} + \frac{1}{2} \mathbb{E}_{g} \left\{ \operatorname{tr}\left[(\boldsymbol{x} - \boldsymbol{m}_{g})(\boldsymbol{x} - \boldsymbol{m}_{g})^{T}\right]\right\} \\ &=& \frac{N}{2} \ln(2\pi) + \frac{1}{2}\operatorname{tr}(\boldsymbol{\Sigma}_{g}) + \frac{1}{2}\Vert\boldsymbol{m}_{g}\Vert^{2} \end{array} $$
(30)

Here the second line uses definition (4); the cross term \(2(\boldsymbol{x} - \boldsymbol{m}_{g})^{T}\boldsymbol{m}_{g}\) integrates to zero because \(\mathbb{E}_{g}[\boldsymbol{x} - \boldsymbol{m}_{g}] = \boldsymbol{0}\); and the last two lines use \(\Vert \boldsymbol{x} - \boldsymbol{m}_{g}\Vert^{2} = \operatorname{tr}\left[(\boldsymbol{x} - \boldsymbol{m}_{g})(\boldsymbol{x} - \boldsymbol{m}_{g})^{T}\right]\).

where \(\boldsymbol{\Sigma}_{g} = \boldsymbol{\Sigma}^{-1/2} \boldsymbol{\Sigma}_{\mu} \boldsymbol{\Sigma}^{-1/2}\), so that \(\operatorname{tr}(\boldsymbol{\Sigma}_{g}) = \operatorname{tr}(\boldsymbol{\Sigma}^{-1} \boldsymbol{\Sigma}_{\mu})\), and \(\Vert \boldsymbol{m}_{g} \Vert^{2} = \Vert \boldsymbol{m} - \boldsymbol{m}_{\mu} \Vert_{\boldsymbol{\Sigma}}^{2}\). Substituting (30) into Eq. 29 and plugging in the values of \(\boldsymbol{m}_{g}\) and \(\boldsymbol{\Sigma}_{g}\) yields (5). □
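As a numerical sanity check (not part of the original derivation), the closed form just derived for the Gaussian cross entropy can be compared against a Monte Carlo estimate. The code below assumes numpy, and all parameter values are illustrative:

```python
import numpy as np

def gaussian_cross_entropy(m_mu, S_mu, m, S):
    """Closed-form H×(N(m_mu, S_mu) || N(m, S)) as in Eq. (5)."""
    N = len(m)
    S_inv = np.linalg.inv(S)
    diff = m - m_mu
    return 0.5 * (N * np.log(2 * np.pi)
                  + np.log(np.linalg.det(S))
                  + np.trace(S_inv @ S_mu)
                  + diff @ S_inv @ diff)

def monte_carlo_cross_entropy(m_mu, S_mu, m, S, n=200_000, seed=0):
    """Estimate H× = -E_{x~N(m_mu, S_mu)}[ln N(x; m, S)] by sampling."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(m_mu, S_mu, size=n)
    N = len(m)
    S_inv = np.linalg.inv(S)
    d = x - m
    log_pdf = (-0.5 * N * np.log(2 * np.pi)
               - 0.5 * np.log(np.linalg.det(S))
               - 0.5 * np.einsum('ij,jk,ik->i', d, S_inv, d))
    return -log_pdf.mean()

# Illustrative 2-D Gaussians.
m_mu = np.array([1.0, -0.5])
S_mu = np.array([[0.8, 0.2], [0.2, 0.5]])
m = np.array([0.0, 0.0])
S = np.array([[1.5, -0.3], [-0.3, 1.0]])

closed = gaussian_cross_entropy(m_mu, S_mu, m, S)
estimate = monte_carlo_cross_entropy(m_mu, S_mu, m, S)
print(closed, estimate)  # the two values agree closely
```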


Cite this article

Zhang, H., Leng, Y. Motor Skills Learning and Generalization with Adapted Curvilinear Gaussian Mixture Model. J Intell Robot Syst 96, 457–475 (2019). https://doi.org/10.1007/s10846-019-00999-y

