Multimodal Information Fusion for Automatic Aesthetics Evaluation of Robotic Dance Poses

  • Published in: International Journal of Social Robotics

Abstract

Aesthetic ability is an advanced cognitive function of human beings. Human dancers in front of a mirror evaluate the aesthetics of their own dance poses by fusing multimodal (visual and non-visual) information, and they use this judgment to improve their performance. Similarly, if a robot could perceive the aesthetics of its own dance poses, it could behave more autonomously and more like a human during robotic dance creation. We therefore propose a novel approach that automatically evaluates the aesthetics of robotic dance poses by fusing multimodal information. From the visual channel, shape features (eccentricity, density, rectangularity, aspect ratio, Hu moment invariants, and complex-coordinate-based Fourier descriptors) are extracted from an image of the pose; from the non-visual channel, joint motion features are read from the robot's internal kinematic state. These two categories of features are fused to characterize a robotic dance pose completely. To evaluate the aesthetics automatically, ten machine learning methods are deployed: Naive Bayes, Bayesian logistic regression, SVM, RBF network, ADTree, random forest, voted perceptron, KStar, DTNB, and bagging. Experiments in a simulated robot environment show the feasibility and good performance of the proposed mechanism: the highest accuracy of aesthetic evaluation, 81.6%, is achieved by ADTree on the mixed (joint + shape) features.
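
The shape features listed above are standard region and boundary descriptors, so the visual channel can be approximated with off-the-shelf tools. Below is a minimal sketch in Python using OpenCV and NumPy, assuming the robot has already been segmented from the background into a binary silhouette mask; the exact feature definitions and the number of Fourier descriptors retained are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def shape_features(mask, n_fourier=10):
    """Shape features of a robot pose from a binary (uint8) silhouette mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)        # largest blob = robot body

    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    _, _, w, h = cv2.boundingRect(cnt)

    rectangularity = area / (w * h)                 # fill of the bounding box
    aspect_ratio = w / h
    density = 4.0 * np.pi * area / perimeter ** 2   # compactness-style measure

    # Eccentricity of the best-fit ellipse (needs at least 5 contour points).
    (_, _), axes, _ = cv2.fitEllipse(cnt)
    b, a = min(axes) / 2.0, max(axes) / 2.0
    eccentricity = np.sqrt(1.0 - (b / a) ** 2)

    # The seven Hu moment invariants of the contour.
    hu = cv2.HuMoments(cv2.moments(cnt)).flatten()

    # Complex-coordinate Fourier descriptors: boundary points as complex
    # numbers, FFT, low-frequency magnitudes normalized by |F_1| so the
    # descriptor is invariant to translation and scale.
    pts = cnt[:, 0, :].astype(float)
    z = (pts[:, 0] - pts[:, 0].mean()) + 1j * (pts[:, 1] - pts[:, 1].mean())
    F = np.abs(np.fft.fft(z))
    fd = F[1:n_fourier + 1] / F[1]

    return np.hstack([eccentricity, density, rectangularity, aspect_ratio, hu, fd])
```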
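The fusion and evaluation step then reduces to concatenating the two feature vectors and training a classifier on human-labeled poses. The sketch below assumes early fusion (simple concatenation) and binary beautiful/not-beautiful labels; it uses scikit-learn's random forest, one of the ten methods compared (ADTree, the best performer, is a Weka classifier and is not shown here), and all data are synthetic placeholders that merely exercise the pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fuse(joint_feats, shape_feats):
    """Early fusion: concatenate the non-visual and visual feature vectors."""
    return np.hstack([joint_feats, shape_feats])

# Synthetic stand-ins for a labeled corpus of robotic dance poses.
rng = np.random.default_rng(0)
n_poses = 200
X_joint = rng.normal(size=(n_poses, 25))     # e.g. one reading per joint
X_shape = rng.normal(size=(n_poses, 21))     # 4 scalars + 7 Hu + 10 Fourier
y = rng.integers(0, 2, size=n_poses)         # human aesthetic labels (0/1)

X = np.vstack([fuse(j, s) for j, s in zip(X_joint, X_shape)])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```

On real data, running the same pipeline on joint-only, shape-only, and joint + shape feature sets would mirror the comparison the abstract reports for the mixed features.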

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61662025 and 61806172), the Research Foundation of Philosophy and Social Science of Hunan Province (Grant No. 16YBX042), the Research Foundation of the Education Bureau of Hunan Province (Grant No. 16C1311), and the Startup Project of Doctor Scientific Research of Shaoxing University (Grant No. 20185003).

Author information

Corresponding author

Correspondence to Hua Peng.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Li, J., Peng, H., Hu, H. et al. Multimodal Information Fusion for Automatic Aesthetics Evaluation of Robotic Dance Poses. Int J of Soc Robotics 12, 5–20 (2020). https://doi.org/10.1007/s12369-019-00535-w
