Abstract
Humanoid robots are being developed to serve as personal assistants. With advances in artificial intelligence, humanoid robots are expected to perform many human tasks, such as housework, human care, and even medical treatment. However, robots currently cannot move as flexibly as humans, which limits their fine motor performance. This is largely because traditional robot control methods rely on manipulators that are difficult to articulate smoothly. To address this problem, we propose a nonlinear, realistic robot motion generation method based on deep learning. Our method decomposes human motions into basic motions and realistic motions using multivariate empirical mode decomposition (MEMD) and learns the biomechanical relationship between them with an autoencoder-based generation network. Experimental results show that the generation network can learn realistic motion features and that adding the learned motions to a robot's basic motions increases the realism of its movement.
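To make the pipeline concrete, the following is a minimal PyTorch sketch of the learning step described above: an autoencoder trained to predict the high-frequency "realistic" component of a motion from its low-frequency "basic" component, so the prediction can be added back to a basic robot motion at synthesis time. This is an illustrative sketch, not the authors' exact architecture; the network sizes, channel count, and the synthetic stand-in data (in the paper, both components come from applying MEMD to captured joint-angle trajectories) are assumptions.

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Fully connected autoencoder mapping per-frame basic motion
    to its 'realistic' residual component (illustrative shape)."""
    def __init__(self, n_channels: int = 63, hidden: int = 256, latent: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Synthetic stand-in data; in practice these two components would be
# obtained by decomposing mocap joint-angle trajectories with MEMD.
frames, channels = 1024, 63
basic = torch.randn(frames, channels)            # low-frequency "basic" motion
realistic = 0.1 * torch.randn(frames, channels)  # high-frequency "realistic" detail

model = MotionAutoencoder(n_channels=channels)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    pred = model(basic)              # predict realistic component from basic motion
    loss = loss_fn(pred, realistic)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Synthesis: add the predicted realistic component back onto the
# basic motion to increase the realism of the generated movement.
generated = basic + model(basic).detach()
```

In this formulation the autoencoder plays the role of the generation network: once trained, any basic motion (including one designed directly for the robot) can be enriched with learned realistic detail without re-capturing human data.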
Acknowledgements
This work was supported by JSPS KAKENHI Grant Number JP20K23352 and the Sasakawa Scientific Research Grant from The Japan Science Society.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary material 1 (mp4 131513 KB)
About this article
Cite this article
Dong, R., Chang, Q. & Ikuno, S. A deep learning framework for realistic robot motion generation. Neural Comput & Applic 35, 23343–23356 (2023). https://doi.org/10.1007/s00521-021-06192-3