Learning position and orientation dynamics from demonstrations via contraction analysis

Abstract

This paper presents a unified framework of model-learning algorithms, called contracting dynamical system primitives (CDSP), for learning the pose (i.e., position and orientation) dynamics of point-to-point motions from demonstrations. The position and orientation (represented using quaternions) trajectories are modeled as two separate autonomous nonlinear dynamical systems. The constraints of the \({\mathbb {S}}^{3}\) manifold are enforced in the formulation of the system that models the orientation dynamics. To capture the variability in the demonstrations, the dynamical systems are estimated using Gaussian mixture models (GMMs). The parameters of the GMMs are learned subject to constraints derived using partial contraction analysis. The reproductions of the learned models are shown to accurately match the demonstrations and are guaranteed to converge to the desired goal location. Experimental results illustrate the CDSP algorithm’s ability to accurately learn position and orientation dynamics and the utility of the learned models in path generation for a Baxter robot arm. The CDSP algorithm is evaluated on a publicly available dataset and a synthetic dataset, and is shown to achieve the lowest or comparable average reproduction errors when compared with state-of-the-art imitation learning algorithms.
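
For intuition, here is a minimal, hypothetical Python sketch of the GMM-based ingredient described above: demonstration pairs of states and velocities are modeled with a joint Gaussian mixture, a velocity field is recovered by Gaussian mixture regression (GMR), and the field is integrated toward the goal. The function names (fit_joint_gmm, gmr_velocity, reproduce) are illustrative only, and the sketch deliberately omits the contraction constraints that CDSP imposes during learning, which are the subject of the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch only: learn x_dot = f(x) from demonstration pairs (x, x_dot) by
# fitting a GMM on the joint [x, x_dot] space, then reproduce a trajectory
# by conditioning on x (GMR) and integrating with a forward Euler step.
# No stability/contraction constraint is enforced here.

def fit_joint_gmm(X, Xdot, n_components=5, seed=0):
    """Fit a GMM over the joint state/velocity space. X, Xdot: (N, d)."""
    data = np.hstack([X, Xdot])  # shape (N, 2d)
    return GaussianMixture(n_components=n_components, random_state=seed).fit(data)

def gmr_velocity(gmm, x):
    """Condition the joint GMM on the state x to predict the velocity x_dot."""
    d = x.shape[0]
    K = len(gmm.weights_)
    h = np.zeros(K)
    cond_means = np.zeros((K, d))
    for k in range(K):
        mu_x, mu_v = gmm.means_[k, :d], gmm.means_[k, d:]
        S_xx = gmm.covariances_[k, :d, :d]
        S_vx = gmm.covariances_[k, d:, :d]
        diff = x - mu_x
        # responsibility of component k for the query state x
        h[k] = gmm.weights_[k] * np.exp(-0.5 * diff @ np.linalg.solve(S_xx, diff)) \
            / np.sqrt(np.linalg.det(2.0 * np.pi * S_xx))
        # conditional mean of x_dot given x under component k
        cond_means[k] = mu_v + S_vx @ np.linalg.solve(S_xx, diff)
    h /= h.sum() + 1e-12
    return h @ cond_means

def reproduce(gmm, x0, goal, dt=0.01, steps=2000, tol=1e-3):
    """Integrate the learned velocity field from x0 until it nears the goal."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        x = x + dt * gmr_velocity(gmm, x)
        path.append(x.copy())
        if np.linalg.norm(x - goal) < tol:
            break
    return np.array(path)

# Example usage (hypothetical data):
#   gmm = fit_joint_gmm(X_demo, Xdot_demo)
#   path = reproduce(gmm, x0=X_demo[0], goal=X_demo[-1])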

Notes

  1. Following the notation of the partial contraction analysis literature (Wang and Slotine 2005), \(\varvec{x}\left( t\right) \) is written twice to make explicit that \(f\left( \cdot \right) \) depends on \(\varvec{x}\left( t\right) \) in two distinct places; see the sketch after these notes.

  2. In \(f_{p}\left( \varvec{x}\left( t\right) ,\varvec{x}\left( t\right) \right) \), the first argument refers to the \(\varvec{x}\left( t\right) \) in \(h_{p}\left( \cdot \right) \) and the second argument refers to the \(\varvec{x}\left( t\right) \) in the affine part of \(f_{p}\left( \cdot \right) \).

  3. In \(f_{o}\left( \varvec{\omega }\left( t\right) ,\varvec{\omega }\left( t\right) ,\varvec{q}\left( t\right) ,\varvec{q}_{g}\right) \), the first argument refers to the \(\varvec{\omega }\left( t\right) \) in \(h_{o}\left( \cdot \right) \) and the second argument refers to the \(\varvec{\omega }\left( t\right) \) in the affine part of \(f_{o}\left( \cdot \right) \).
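
For reference, the generic partial contraction construction from Wang and Slotine (2005) that this two-argument notation refers to can be sketched as follows; it is written for a generic system \(f\left( \cdot \right) \) with an auxiliary state \(\varvec{y}\left( t\right) \) introduced here for illustration, not for the paper's specific \(f_{p}\left( \cdot \right) \) or \(f_{o}\left( \cdot \right) \). The actual system and its virtual (auxiliary) copy are

\[
\dot{\varvec{x}}\left( t\right) = f\left( \varvec{x}\left( t\right) ,\varvec{x}\left( t\right) ,t\right) , \qquad \dot{\varvec{y}}\left( t\right) = f\left( \varvec{y}\left( t\right) ,\varvec{x}\left( t\right) ,t\right) ,
\]

where the virtual system replaces only the first argument with the new state \(\varvec{y}\left( t\right) \). If the virtual system is contracting in \(\varvec{y}\left( t\right) \), all of its particular solutions converge to one another exponentially; since \(\varvec{y}\left( t\right) =\varvec{x}\left( t\right) \) is one such particular solution, \(\varvec{x}\left( t\right) \) converges to any other particular solution, e.g., an equilibrium placed at the goal.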

References

  • Abbeel, P., & Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. In International conference on machine learning (pp. 1–8). ACM.

  • Ahmadzadeh, R., Rana, M. A., & Chernova, S. (2017). Generalized cylinders for learning, reproduction, generalization, and refinement of robot skills. In Robotics: Science and systems.

  • Akgun, B., Cakmak, M., Jiang, K., & Thomaz, A. L. (2012). Keyframe-based learning from demonstration. International Journal of Social Robotics, 4(4), 343–355.

  • Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5), 469–483.

  • Billard, A., & Matarić, M. J. (2001). Learning human arm movements by imitation: Evaluation of a biologically inspired connectionist architecture. Robotics and Autonomous Systems, 37(2), 145–160.

  • Bowen, C., & Alterovitz, R. (2014). Closed-loop global motion planning for reactive execution of learned tasks. In IEEE/RSJ international conference on intelligent robots and systems (pp. 1754–1760).

  • Calinon, S., Bruno, D., Caldwell, D. G. (2014). A task-parameterized probabilistic model with minimal intervention control. In IEEE international conference on robotics and automation (ICRA) (pp. 3339–3344).

  • Cohn, D. A., Ghahramani, Z., & Jordan, M. I. (1996). Active learning with statistical models. Journal of Artificial Intelligence Research, 4, 129–145.

  • Corke, P. I. (2011). Robotics, vision and control: Fundamental algorithms in Matlab. Berlin: Springer.

  • Dani, A. P., Chung, S. J., & Hutchinson, S. (2015). Observer design for stochastic nonlinear systems via contraction-based incremental stability. IEEE Transactions on Automatic Control, 60(3), 700–714.

  • Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society Series B (Methodological), 39(1), 1–38.

  • Diankov, R. (2010). Automated construction of robotic manipulation programs. Ph.D. thesis, Carnegie Mellon University.

  • Dragan, A. D., Muelling, K., Bagnell, J. A., & Srinivasa, S. S. (2015). Movement primitives via optimization. In IEEE international conference on robotics and automation (ICRA) (pp. 2339–2346).

  • Gribovskaya, E., Khansari-Zadeh, S. M., & Billard, A. (2010). Learning non-linear multivariate dynamics of motion in robotic manipulators. The International Journal of Robotics Research, 30(1), 80–117.

  • Ijspeert, A. J., Nakanishi, J., & Schaal, S. (2002). Learning rhythmic movements by demonstration using nonlinear oscillators. In IEEE/RSJ international conference on intelligent robots and systems (pp. 958–963).

  • Ijspeert, A. J., Nakanishi, J., Hoffmann, H., Pastor, P., & Schaal, S. (2013). Dynamical movement primitives: Learning attractor models for motor behaviors. Neural Computation, 25(2), 328–373.

  • Jenkins, O. C., Matarić, M. J., & Weber, S. (2000). Primitive-based movement classification for humanoid imitation. In IEEE-RAS international conference on humanoid robotics.

  • Kalakrishnan, M., Righetti, L., Pastor, P., & Schaal, S. (2012). Learning force control policies for compliant robotic manipulation. In International conference on machine learning (pp. 49–50).

  • Khansari-Zadeh, S. M., & Billard, A. (2010). Imitation learning of globally stable non-linear point-to-point robot motions using nonlinear programming. In IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2676–2683).

  • Khansari-Zadeh, S. M., & Billard, A. (2011). Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics, 27(5), 943–957.

  • Khansari-Zadeh, S. M., & Billard, A. (2014). Learning control Lyapunov function to ensure stability of dynamical system-based robot reaching motions. Robotics and Autonomous Systems, 62(6), 752–765.

  • Kober, J., Bagnell, J. A., & Peters, J. (2013). Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11), 1238–1274.

  • Koenig, N., & Matarić, M. J. (2016). Robot life-long task learning from human demonstrations: A Bayesian approach. Autonomous Robots, 40(6), 1–16.

  • Langsfeld, J. D., Kaipa, K. N., Gentili, R. J., Reggia, J. A., & Gupta, S. K. (2014). Incorporating failure-to-success transitions in imitation learning for a dynamic pouring task. In IEEE/RSJ international conference on intelligent robots and systems.

  • Laumond, J. P., Mansard, N., & Lasserre, J. B. (2014). Optimality in robot motion: Optimal versus optimized motion. Communications of the ACM, 57(9), 82–89.

  • Laumond, J. P., Mansard, N., & Lasserre, J. B. (2015). Optimization as motion selection principle in robot action. Communications of the ACM, 58(5), 64–74.

  • Lemme, A., Neumann, K., Reinhart, R. F., & Steil, J. J. (2013). Neurally imprinted stable vector fields. In European symposium on artificial neural networks (pp. 327–332).

  • Lohmiller, W., & Slotine, J. J. E. (1998). On contraction analysis for nonlinear systems. Automatica, 34(6), 683–696.

  • Nemec, B., Tamošiūnaitė, M., Wörgötter, F., & Ude, A. (2009). Task adaptation through exploration and action sequencing. In IEEE-RAS international conference on humanoid robots (pp. 610–616).

  • Neumann, K., & Steil, J. J. (2015). Learning robot motions with stable dynamical systems under diffeomorphic transformations. Robotics and Autonomous Systems, 70, 1–15.

  • Paraschos, A., Daniel, C., Peters, J. R., & Neumann, G. (2013). Probabilistic movement primitives. In Advances in neural information processing systems (pp. 2616–2624).

  • Pastor, P., Hoffmann, H., Asfour, T., & Schaal, S. (2009). Learning and generalization of motor skills by learning from demonstration. In IEEE international conference on robotics and automation (pp. 763–768).

  • Peters, J., & Schaal, S. (2008). Natural actor-critic. Neurocomputing, 71(7), 1180–1190.

  • Priess, M. C., Choi, J., & Radcliffe, C. (2014). The inverse problem of continuous-time linear quadratic Gaussian control with application to biological systems analysis. In ASME 2014 dynamic systems and control conference.

  • Rai, A., Meier, F., Ijspeert, A., & Schaal, S. (2014). Learning coupling terms for obstacle avoidance. In International conference on humanoid robotics (pp. 512–518).

  • Ravichandar, H., & Dani, A. P. (2015). Learning contracting nonlinear dynamics from human demonstration for robot motion planning. In ASME dynamic systems and control conference (DSCC).

  • Ravichandar, H., & Dani, A. P. (2016). Human modeling for bio-inspired robotics. In Intention Inference for human–robot collaboration in assistive robotics (pp. 217–249). Elsevier.

  • Ravichandar, H., Salehi, I., & Dani, A. (2017). Learning partially contracting dynamical systems from demonstrations. In Proceedings of the 1st annual conference on robot learning (Vol. 78, pp. 369–378). PMLR.

  • Ravichandar, H., Thota, P. K., & Dani, A. P. (2016). Learning periodic motions from human demonstrations using transverse contraction analysis. In American control conference (ACC) (pp. 4853–4858). IEEE.

  • Rossano, G. F., Martinez, C., Hedelind, M., Murphy, S., & Fuhlbrigge, T. A. (2013). Easy robot programming concepts: An industrial perspective. In IEEE international conference on automation science and engineering (CASE) (pp. 1119–1126).

  • Saunders, J., Nehaniv, C. L., & Dautenhahn, K. (2006). Teaching robots by moulding behavior and scaffolding the environment. In ACM SIGCHI/SIGART conference on human–robot interaction (pp. 118–125).

  • Schaal, S. (1999). Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6), 233–242.

  • Schaal, S., Mohajerian, P., & Ijspeert, A. (2007). Dynamics systems vs. optimal control: A unifying view. Progress in Brain Research, 165, 425–445.

  • Silvério, J., Rozo, L., Calinon, S., & Caldwell, D. G. (2015). Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems. In IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 464–470).

  • Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction (Vol. 1). Cambridge: MIT Press.

  • Todorov, E., & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226–1235.

  • Ude, A., Nemec, B., Petrić, T., & Morimoto, J. (2014). Orientation in Cartesian space dynamic movement primitives. In IEEE international conference on robotics and automation (ICRA) (pp. 2997–3004).

  • Ude, A. (1999). Filtering in a unit quaternion space for model-based object tracking. Robotics and Autonomous Systems, 28(2), 163–172.

  • Ude, A., Gams, A., Asfour, T., & Morimoto, J. (2010). Task-specific generalization of discrete and periodic dynamic movement primitives. IEEE Transactions on Robotics, 26(5), 800–815.

  • Wang, C., Zhao, Y., Lin, C. Y., & Tomizuka, M. (2014). Fast planning of well conditioned trajectories for model learning. In IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 1460–1465).

  • Wang, W., & Slotine, J. J. E. (2005). On partial contraction analysis for coupled nonlinear oscillators. Biological Cybernetics, 92(1), 38–53.

  • Zucker, M., Ratliff, N., Dragan, A. D., Pivtoraiko, M., Klingensmith, M., Dellin, C. M., et al. (2013). CHOMP: Covariant Hamiltonian optimization for motion planning. The International Journal of Robotics Research, 32(9–10), 1164–1193.

Acknowledgements

The authors would like to thank Klaus Neumann and Jochen Steil for providing the SEA means and standard deviations of the seven state-of-the-art algorithms used in the comparison presented in Sect. 5.1, and the anonymous reviewers for their valuable comments and suggestions, which improved the quality of the paper.

Funding

Funding was provided by the UTC Institute for Advanced Systems Engineering (UTC-IASE) of the University of Connecticut and the United Technologies Corporation.

Author information

Corresponding author

Correspondence to Ashwin Dani.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (MP4, 33,061 KB)

About this article

Cite this article

Ravichandar, H., Dani, A. Learning position and orientation dynamics from demonstrations via contraction analysis. Auton Robot 43, 897–912 (2019). https://doi.org/10.1007/s10514-018-9758-x
