Learning and Recognition of Hybrid Manipulation Motions in Variable Environments Using Probabilistic Flow Tubes

International Journal of Social Robotics

Abstract

For robots to work effectively with humans, they must learn and recognize the activities that humans perform. We enable a robot to learn a library of activities from user demonstrations and to use that library to recognize an action performed by an operator in real time. Our contributions are threefold: (1) a novel probabilistic flow tube representation that can intuitively capture a wide range of motions and can be used to support compliant execution; (2) a method that identifies the relevant features of a motion and ensures that the learned representation preserves these features in new and unforeseen situations; and (3) a fast incremental algorithm for recognizing user-performed motions using this representation. Our approach provides several capabilities beyond those of existing algorithms. First, we leverage temporal information to model motions that may exhibit non-Markovian characteristics. Second, our approach can identify parameters of a motion not explicitly specified by the user. Third, we model hybrid continuous and discrete motions in a unified representation that avoids abstracting out the continuous details of the data. Experimental results show a 49% improvement over prior art in recognition rate for varying environments and a 24% improvement for a static environment, while maintaining average computing times for incremental recognition of less than half of human reaction time. We also demonstrate motion learning and recognition capabilities on real-world robot platforms.
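
To make the abstract's central idea concrete, the sketch below shows one way a flow-tube-style representation could be structured: a time-indexed sequence of Gaussians over the demonstrated state, fit from aligned demonstrations and scored against a partially observed trajectory for recognition. This is an illustrative sketch only, not the paper's algorithm; the class name SimpleFlowTube, the assumption that demonstrations are already time-aligned and resampled to a common length, and the plain log-likelihood scoring are all simplifications introduced here, and the paper's feature identification and incremental recognition steps are omitted.

```python
# Minimal illustrative sketch of a flow-tube-style motion model (NOT the
# authors' implementation). Assumes demonstrations are already time-aligned
# and resampled to a common length T.
import numpy as np


class SimpleFlowTube:
    """A sequence of per-timestep Gaussians (mean, covariance) over robot state."""

    def __init__(self, means, covs):
        self.means = means          # shape (T, D): mean state at each timestep
        self.covs = covs            # shape (T, D, D): covariance at each timestep

    @classmethod
    def from_demonstrations(cls, demos, reg=1e-6):
        """demos: array of shape (N, T, D) -- N aligned demos, T steps, D dims."""
        demos = np.asarray(demos, dtype=float)
        n, t_len, dim = demos.shape
        means = demos.mean(axis=0)
        covs = np.empty((t_len, dim, dim))
        for t in range(t_len):
            centered = demos[:, t, :] - means[t]
            covs[t] = centered.T @ centered / max(n - 1, 1)
            covs[t] += reg * np.eye(dim)      # keep covariances invertible
        return cls(means, covs)

    def log_likelihood(self, traj):
        """Score a (possibly partial) trajectory against the start of the tube."""
        traj = np.asarray(traj, dtype=float)
        total = 0.0
        for t, x in enumerate(traj[: len(self.means)]):
            diff = x - self.means[t]
            cov = self.covs[t]
            _, logdet = np.linalg.slogdet(cov)
            maha = diff @ np.linalg.solve(cov, diff)
            total += -0.5 * (maha + logdet + len(diff) * np.log(2.0 * np.pi))
        return total


# Hypothetical usage: recognize which learned motion a partial observation
# matches best. demos_by_label and partial_traj are placeholder inputs.
# library = {lbl: SimpleFlowTube.from_demonstrations(d)
#            for lbl, d in demos_by_label.items()}
# best = max(library, key=lambda lbl: library[lbl].log_likelihood(partial_traj))
```
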


Acknowledgements

The authors thank David Mittman, Sarah Osentoski, and Shuo Wang for their help in operating the different robots used in our demonstrations and experiments.

Author information

Correspondence to Shuonan Dong.

Additional information

This research was supported by a National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a. Additional support was provided by a NASA JPL Strategic University Research Partnership.

About this article

Cite this article

Dong, S., Williams, B. Learning and Recognition of Hybrid Manipulation Motions in Variable Environments Using Probabilistic Flow Tubes. Int J of Soc Robotics 4, 357–368 (2012). https://doi.org/10.1007/s12369-012-0155-x
