
Animating with style: defining expressive semantics of motion

Original Article · The Visual Computer

Abstract

Actions performed by a virtual character can be controlled with verbal commands such as ‘walk five steps forward’. Similar control of the motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as ‘aggressive walking’. In this paper, we present a method for controlling motion style with relative commands such as ‘do the same, but more sadly’. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by identifying which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent of how the motion is generated, it can be applied to virtually any parametric synthesis method.
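
As a rough illustration of the idea, the following Python sketch shows how a relative command such as ‘do the same, but more sadly’ could be interpreted as a vector offset in a motion feature space. The feature names, style-vector values, and function names are hypothetical placeholders for illustration; they are not the paper's actual feature set or feature-selection procedure.

    import numpy as np

    # Hypothetical motion features; the paper computes a larger set of
    # features from motion capture and selects those essential for a
    # style to be perceived.
    FEATURES = ["speed", "stride_length", "posture_height", "arm_swing"]

    # A style vector is a direction in feature space derived from acted
    # examples and comparative annotations ("clip A is sadder than clip B").
    # The values below are invented for illustration only.
    STYLE_VECTORS = {
        "sad": np.array([-0.4, -0.3, -0.6, -0.2]),
        "aggressive": np.array([0.5, 0.3, 0.1, 0.7]),
    }

    def apply_relative_command(features, style, amount=1.0):
        # 'Do the same, but more <style>': move the current feature
        # vector along the style direction by the requested amount.
        return features + amount * STYLE_VECTORS[style]

    current = np.array([1.2, 0.9, 1.0, 0.5])  # features of the current motion
    target = apply_relative_command(current, "sad", amount=0.5)

    # A parametric synthesizer would then be steered toward `target`,
    # e.g. by choosing interpolation parameters whose predicted features
    # lie closest to it.
    print(dict(zip(FEATURES, np.round(target, 2))))

Because the command operates only on the feature vector, the same mechanism can sit in front of any parametric synthesis back-end, which is what makes the control method independent of motion generation.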

References

  1. Aviezer, H., Hassin, R.R., Ryan, J., Grady, C., Susskind, J., Anderson, A., Moscovitch, M., Bentin, S.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)

  2. Bruderlin, A., Williams, L.: Motion signal processing. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '95, pp. 97–104. ACM, New York (1995)

  3. Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '00, pp. 173–182. ACM Press/Addison-Wesley Publishing Co., New York (2000)

  4. Cho, K., Chen, X.: Classifying and visualizing motion capture sequences using deep neural networks. In: Proceedings of the 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014. SciTePress (2014)

  5. Clavel, C., Plessier, J., Martin, J.C., Ach, L., Morel, B.: Combining facial and postural expressions of emotions in a virtual character. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H. (eds.) Intelligent Virtual Agents. Lecture Notes in Computer Science, vol. 5773, pp. 287–300. Springer, Berlin (2009)

  6. Förger, K., Honkela, T., Takala, T.: Impact of varying vocabularies on controlling motion of a virtual actor. In: Aylett, R., Krenn, B., Pelachaud, C., Shimodaira, H. (eds.) Intelligent Virtual Agents. Lecture Notes in Computer Science, vol. 8108, pp. 239–248. Springer, Berlin (2013)

  7. Gleicher, M.: Retargetting motion to new characters. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '98, pp. 33–42. ACM, New York (1998)

  8. Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24(3), 1082–1089 (2005)

  9. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '02, pp. 133–142. ACM, New York (2002)

  10. Johnson, K.L., McKay, L.S., Pollick, F.E.: He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays. Cognition 119(2), 265–280 (2011)

  11. Kleinsmith, A., Bianchi-Berthouze, N.: Affective body expression perception and recognition: a survey. IEEE Trans. Affect. Comput. 4(1), 15–33 (2013)

  12. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '02, pp. 473–482. ACM, New York (2002)

  13. Lawrence, N.: Mocap toolbox for MATLAB. Available online at http://staffwww.dcs.shef.ac.uk/people/N.Lawrence/mocap/ (2011). Accessed 9 Feb 2015

  14. Min, J., Chai, J.: Motion graphs++: a compact generative model for semantic motion analysis and synthesis. ACM Trans. Graph. 31(6), 153:1–153:12 (2012)

  15. Mukai, T., Kuriyama, S.: Geostatistical motion interpolation. In: ACM SIGGRAPH 2005 Papers. SIGGRAPH '05, pp. 1062–1070. ACM, New York (2005)

  16. Poppe, R.: A survey on vision-based human action recognition. Image Vis. Comput. 28(6), 976–990 (2010)

  17. Rose, C., Cohen, M., Bodenheimer, B.: Verbs and adverbs: multidimensional motion interpolation. IEEE Comput. Graph. Appl. 18(5), 32–40 (1998)

  18. Shapiro, A., Cao, Y., Faloutsos, P.: Style components. In: Proceedings of Graphics Interface 2006, pp. 33–39. Canadian Information Processing Society, Toronto, Canada (2006)

  19. Shoemake, K.: Animating rotation with quaternion curves. SIGGRAPH Comput. Graph. 19(3), 245–254 (1985)

  20. Troje, N.F.: Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J. Vis. 2(5), 371–387 (2002)

  21. Troje, N.F.: Retrieving information from human movement patterns. In: Shipley, T.F., Zacks, J.M. (eds.) Understanding Events: How Humans See, Represent, and Act on Events, pp. 308–334. Oxford University Press, New York (2008)

  22. Unuma, M., Anjyo, K., Takeuchi, R.: Fourier principles for emotion-based human figure animation. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '95, pp. 91–96. ACM, New York (1995)

  23. Urtasun, R., Glardon, P., Boulic, R., Thalmann, D., Fua, P.: Style-based motion synthesis. Comput. Graph. Forum 23(4), 799–812 (2004)

  24. Wang, X., Jia, J., Cai, L.: Affective image adjustment with a single word. Vis. Comput. 29(11), 1121–1133 (2013)

  25. Wu, J., Hu, D., Chen, F.: Action recognition by hidden temporal models. Vis. Comput. 30(12), 1395–1404 (2014)

  26. Yoo, I., Vanek, J., Nizovtseva, M., Adamo-Villani, N., Benes, B.: Sketching human character animations by composing sequences from large motion database. Vis. Comput. 30(2), 213–227 (2014)

  27. Zhuang, Y., Pan, Y., Xiao, J.: Automatic synthesis and editing of motion styles. In: A Modern Approach to Intelligent Animation: Theory and Practice, pp. 255–265. Springer, Berlin (2008)

Acknowledgments

This work has been supported by the HeCSE graduate school and the project Multimodally grounded language technology (254104) funded by the Academy of Finland. The Mocap toolbox by Neil Lawrence [13] was used in this research.

Author information

Corresponding author

Correspondence to Klaus Förger.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (MP4 23047 KB)

About this article

Cite this article

Förger, K., Takala, T. Animating with style: defining expressive semantics of motion. Vis Comput 32, 191–203 (2016). https://doi.org/10.1007/s00371-015-1064-4
