
Motion rank: applying page rank to motion data search

Original Article · The Visual Computer

Abstract

As the use of motion capture data increases, the amount of motion data available on the web also grows. In this paper, we investigate a new method to retrieve and visualize motion data in a manner similar to Google Image Search. The main idea is to represent raw motion data as a series of short animated clip arts, called motion clip arts. Short animated clip arts can be quickly browsed and understood, even when many of them appear on the screen at the same time. We first temporally segment the raw motion data files into short yet semantically meaningful motion segments. We then convert the motion segments into motion clip arts in a way that emphasizes the main motion and minimizes the data size for efficient transmission and processing on the web. When a user query is received, our system first retrieves all relevant motion clip arts by considering the input keywords and the similarity between motions. The retrieved results are then re-ranked by our ranking algorithm, which builds on Google's image-ranking algorithm (VisualRank). To demonstrate the usability of our method, we build a web-based motion search system covering the entire CMU motion capture database. The experimental results show a significant improvement in relevance compared with a simple keyword-based search interface.
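The re-ranking step follows the VisualRank idea of running PageRank on a similarity graph: clips that are similar to many other keyword-matched clips receive a higher rank. The Python sketch below illustrates that computation under simplified assumptions; the names `rerank_by_visualrank` and `sim` are illustrative only, and the paper's actual motion-similarity measure and graph construction differ.

```python
# Minimal sketch of VisualRank-style re-ranking over a motion-similarity graph.
# Assumes a precomputed pairwise similarity matrix; not the paper's implementation.
import numpy as np

def rerank_by_visualrank(similarity, damping=0.85, iters=100, tol=1e-9):
    """Rank items by the stationary distribution of a random walk whose
    transition probabilities are proportional to pairwise similarity."""
    n = similarity.shape[0]
    S = similarity.astype(float).copy()
    np.fill_diagonal(S, 0.0)             # drop self-similarity
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0.0] = 1.0      # avoid division by zero for isolated items
    P = S / col_sums                     # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)              # uniform initial rank vector
    teleport = np.full(n, 1.0 / n)       # uniform random-jump distribution
    for _ in range(iters):
        r_new = damping * (P @ r) + (1.0 - damping) * teleport
        if np.abs(r_new - r).sum() < tol:
            r = r_new
            break
        r = r_new
    return np.argsort(-r), r             # indices sorted by descending rank score

# Toy usage: four keyword-matched clips with pairwise motion similarity in [0, 1].
sim = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.3, 0.1],
                [0.2, 0.3, 1.0, 0.4],
                [0.1, 0.1, 0.4, 1.0]])
order, scores = rerank_by_visualrank(sim)
print(order)   # clips 0 and 1 rank highest because they reinforce each other
```

The damping factor plays the same role as in PageRank: it mixes the similarity-driven random walk with uniform jumps so that the rank vector is well defined even when the similarity graph is sparse or disconnected.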



Acknowledgements

We thank the anonymous reviewers for their comments and suggestions. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2016R1D1A1B03930472), and partially supported by the NRF of Korea (NRF-MIAXA003-2010-0029744). This work was also supported by the Catholic University of Korea Research Fund, 2017.

Author information

Corresponding author

Correspondence to Taesoo Kwon.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (pdf 2264 KB)

Supplementary material 2 (mp4 52844 KB)

About this article

Cite this article

Choi, M.G., Kwon, T. Motion rank: applying page rank to motion data search. Vis Comput 35, 289–300 (2019). https://doi.org/10.1007/s00371-018-1498-6
