A deep learning framework for realistic robot motion generation

  • Special issue on Human-in-the-loop Machine Learning and its Applications
  • Published in Neural Computing and Applications

Abstract

Humanoid robots are being developed to serve as personal assistants. As artificial intelligence technology advances, humanoid robots are expected to take over many human tasks, such as housework, personal care, and even medical treatment. However, current robots cannot move as flexibly as humans, which limits their performance of fine motor skills. This is primarily because traditional robot control methods rely on manipulators that are difficult to articulate smoothly. To address this problem, we propose a nonlinear, realistic robot motion generation method based on deep learning. Our method decomposes human motions into basic motions and realistic motions using multivariate empirical mode decomposition and learns the biomechanical relationships between them with an autoencoder-based generation network. The experimental results show that the generation network can learn realistic motion features and that adding the learned motions to a robot increases the realism of its movements.
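As a rough illustration of the pipeline the abstract describes, the sketch below separates a motion sequence into a low-frequency "basic" part and a high-frequency "realistic" part, then trains a small autoencoder to predict the latter from the former. This is not the authors' implementation: the moving-average split is only a crude stand-in for multivariate empirical mode decomposition, and all function names, layer sizes, and hyperparameters (`crude_basic_realistic_split`, `MotionAutoencoder`, window length, hidden sizes) are illustrative assumptions.

```python
# Minimal sketch of the described pipeline, NOT the authors' implementation.
# The moving-average split is a crude stand-in for multivariate empirical
# mode decomposition (MEMD); all names and sizes are illustrative.

import numpy as np
import torch
import torch.nn as nn


def crude_basic_realistic_split(motion: np.ndarray, window: int = 15):
    """Split a motion (frames x joint channels) into a low-frequency 'basic'
    part and a high-frequency 'realistic' residual. The paper uses MEMD for
    this step; a moving-average low-pass filter is used here only for brevity."""
    kernel = np.ones(window) / window
    basic = np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, motion)
    realistic = motion - basic
    return basic, realistic


class MotionAutoencoder(nn.Module):
    """Small fully connected autoencoder that maps a frame of basic motion to
    the corresponding realistic detail (layer sizes are assumptions)."""

    def __init__(self, n_channels: int, hidden: int = 256, latent: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, basic_frames: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(basic_frames))


def train(model, basic: np.ndarray, realistic: np.ndarray,
          epochs: int = 200, lr: float = 1e-3):
    """Learn the mapping from basic motion to realistic detail."""
    x = torch.as_tensor(basic, dtype=torch.float32)
    y = torch.as_tensor(realistic, dtype=torch.float32)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model


if __name__ == "__main__":
    # Toy example: 500 frames of 25 joint-angle channels.
    motion = np.random.randn(500, 25).cumsum(axis=0)
    basic, realistic = crude_basic_realistic_split(motion)
    model = train(MotionAutoencoder(n_channels=25), basic, realistic)
    # At run time, the predicted detail would be added back onto the robot's
    # basic joint trajectory to make its movement look more natural.
    detail = model(torch.as_tensor(basic, dtype=torch.float32)).detach().numpy()
    enriched = basic + detail
```

The frame-wise mapping above is only to keep the sketch short; the paper's network presumably operates on motion sequences or windows rather than on single frames.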





Acknowledgements

This work was supported by JSPS KAKENHI Grant Number JP20K23352 and the Sasakawa Scientific Research Grant from The Japan Science Society.

Author information

Corresponding author

Correspondence to Qiong Chang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 131513 KB)


About this article


Cite this article

Dong, R., Chang, Q. & Ikuno, S. A deep learning framework for realistic robot motion generation. Neural Comput & Applic 35, 23343–23356 (2023). https://doi.org/10.1007/s00521-021-06192-3

