Annotating Movement Phrases in Vietnamese Folk Dance Videos

  • Conference paper
  • Published in: Advances in Artificial Intelligence: From Theory to Practice (IEA/AIE 2017)

Abstract

This paper addresses the annotation of movement phrases in Vietnamese folk dance videos, which are mainly gathered, stored and used for teaching at art schools and for preserving intangible cultural heritage (the dances are performed by renowned folk dance masters). We propose a framework for automatic movement phrase annotation in which motion vectors serve as movement phrase features, so that movement phrase classification can be carried out on the basis of the dancers' trajectories. A detailed study of Vietnamese folk dance motivates the use of optical flow as the movement phrase feature for both detection and classification. To make the annotations of Vietnamese folk dance rich and useful, a lookup table of movement phrase descriptions is defined. In initial experiments, a sample movement phrase dataset is built to train a k-NN classification model. The experiments show the effectiveness of the proposed framework for automatic movement phrase annotation, with a classification accuracy of at least 88%.
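
To make the described pipeline concrete, the sketch below shows one possible realization of the steps named in the abstract: dense optical flow summarized as a movement-phrase feature vector, a k-NN classifier, and a lookup table of phrase descriptions. This is not the authors' implementation; the choice of OpenCV and scikit-learn, all parameter values, file names and phrase labels are illustrative assumptions.

```python
# Minimal sketch of the pipeline in the abstract (assumptions throughout):
# optical-flow features -> k-NN classification -> description lookup.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def phrase_features(video_path, bins=16, max_frames=150):
    """Summarize a phrase clip as a magnitude-weighted histogram of
    optical-flow directions (one fixed-length vector per clip)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    hist = np.zeros(bins, dtype=np.float64)

    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Accumulate a histogram of motion directions, weighted by magnitude.
        h, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hist += h
        prev_gray = gray

    cap.release()
    total = hist.sum()
    return hist / total if total > 0 else hist


# Hypothetical annotated training clips and phrase labels.
train_clips = ["phrase_001.mp4", "phrase_002.mp4"]
train_labels = ["fan_opening", "thread_spinning"]

X_train = np.array([phrase_features(p) for p in train_clips])
knn = KNeighborsClassifier(n_neighbors=1)   # k=1 only because this toy set is tiny
knn.fit(X_train, train_labels)

# Annotate a new clip: predict its phrase label, then attach a textual
# description from a (hypothetical) lookup table.
descriptions = {"fan_opening": "dancer opens the fan in a wide arc",
                "thread_spinning": "hands mimic spinning thread"}
label = knn.predict([phrase_features("new_phrase.mp4")])[0]
print(label, "->", descriptions.get(label, "no description available"))
```

In practice the feature extraction, the size of the training dataset and the value of k would follow the paper itself; the sketch only shows how optical-flow features, a k-NN model and a description table fit together.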


References

  1. LaViers, A., Bai, L., Bashiri, M., Heddy, G., Sheng, Y.: Abstractions for design-by-humans of Heterogeneous Behaviors. In: Laumond, J.-P., Abe, N. (eds.) Dance Notations and Robot Motion. Springer Tracts in Advanced Robotics, vol. 111, pp. 237–262. Springer, Switzerland (2015)

    Google Scholar 

  2. Patel, D.H.: Content based video retrieval: a survey. Int. J. Comput. Appl. 109(13), January 2015

    Google Scholar 

  3. Jeong, J.-W., Hong, H.-K., Lee, D.-H.: Ontology-based automatic video annotation technique in smart TV environment. IEEE Trans. Consum. Electron. 57(4), 1830–1836 (2011)

    Article  Google Scholar 

  4. El Raheb, K., Ioannidis, Y.: A labanotation based ontology for representing dance movement. In: Proceedings of the 9th International Gesture Workshop (2011)

    Google Scholar 

  5. Bai, L., Lao, S.-Y., Liu, H.-T., Bu, J.: Video shot boundary detection using Petrinet. In: 2008 International Conference on Machine Learning and Cybernetics, vol. 5, pp. 3047–3051. IEEE (2008)

    Google Scholar 

  6. Cooper, M., Liu, T., Rieffel, E.: Video segmentation via temporal pattern classification. IEEE Trans. Multimedia 9(3), 610–618 (2007)

    Article  Google Scholar 

  7. Neagle, R.J.: Emotion by motion: expression simulation in Virtual Ballet. Thesis of Doctor of Philosophy. The University Of Leeds School of computing. United Kingdom (2005)

    Google Scholar 

  8. Saad, S., De Beul, D., Mahmoudi, S., Manneback, P.: An ontology for video human movement representation based on benesh notation. In: International Conference on Multimedia Computing and Systems (2012)

    Google Scholar 

  9. Chantamunee, S., Gotoh, Y.: University of Sheffield at trecvid 2007: Shot boundary detection and rushes summarization. In: TRECVID. Citeseer (2007)

    Google Scholar 

  10. Hoi, S.C., Wong, L.L., Lyu, A.: Chinese university of hongkong at trecvid 2006: shot boundary detection and video search. In: TRECVid 2006 Workshop, pp. 76–86 (2006)

    Google Scholar 

  11. Porter, S.V.: Video segmentation and indexing using motion estimation. Ph.D. dissertation, University of Bristol (2004)

    Google Scholar 

  12. Nakano, T., Kimura, A.: Automatic video annotation via hierarchical topic trajectory model considering cross-modal correlations. In: IEEE 2011 (2011)

    Google Scholar 

  13. Ngoc, T.T.: CHEO dance curriculum. Hanoi Academy of Theatre and Cinema (1998)

    Google Scholar 

  14. Zhang, T., Xu, C., Zhu, G.: A generic framework for video annotation via semi-supervised learning. IEEE Trans. Multimedia 14(4), 1206–1219 (2012)

    Article  Google Scholar 

  15. Zhu, X., Fan, J., Xue, X., Wu, L., Elmagarmid, A.K.: Semi-automatic video content annotation. In: Proceeding of Third IEEE Pacific Rim Conference on Multimedia, pp. 37–52 (2008)

    Google Scholar 

  16. Gao, X., Li, J., Shi, Y.: A video shot boundary detection algorithm based on feature tracking. In: Wang, G.-Y., Peters, J.F., Skowron, A., Yao, Y. (eds.) RSKT 2006. LNCS, vol. 4062, pp. 651–658. Springer, Heidelberg (2006). doi:10.1007/11795131_95

    Chapter  Google Scholar 

  17. Wu, X., Yuen, P.C., Liu, C., Huang, J., Detection, S.B.: An information saliency approach. In: 2008 Congress on Image and Signal Processing, pp. 808–812 (2008)

    Google Scholar 

  18. Zhao, X., Lin, K.-H., Yun, F., Corners, T.F.: A novel approach to detect text and caption in videos. IEEE Trans. Image Process. 20(3), 2296–2305 (2011)

    Google Scholar 

  19. Chang, Y., Lee, D.J., Hong, Y., Archibald, J.: Unsupervised video shot detection using clustering ensemble with a color global scale invariant feature transform descriptor. EURASIP J. Image Video Process. 2008, 1–10 (2008)

    Article  Google Scholar 

  20. Jiang, Y.G., Dai, Q., Wang, J., Ngo, C.W.: Fast semantic Diffusion for large scale context based image and video annotation. IEEE Trans. Image Process. 21(6), 3080–3091 (2012)

    Article  MathSciNet  Google Scholar 

Download references

Acknowledgments

This work has received support from the European H2020 project Marie Sklodowska-Curie Actions (MSCA) Research and Innovation Staff Exchange (RISE): AniAge (High Dimensional Heterogeneous Data Based Animation Techniques for Southeast Asian Intangible Cultural Heritage Digital Content), project number 691215.

Author information


Corresponding author

Correspondence to Chau Ma-Thi.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Ma-Thi, C., Tabia, K., Lagrue, S., Le-Thanh, H., Bui-The, D., Nguyen-Thanh, T. (2017). Annotating Movement Phrases in Vietnamese Folk Dance Videos. In: Benferhat, S., Tabia, K., Ali, M. (eds) Advances in Artificial Intelligence: From Theory to Practice. IEA/AIE 2017. Lecture Notes in Computer Science, vol 10351. Springer, Cham. https://doi.org/10.1007/978-3-319-60045-1_1

  • DOI: https://doi.org/10.1007/978-3-319-60045-1_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-60044-4

  • Online ISBN: 978-3-319-60045-1

  • eBook Packages: Computer Science, Computer Science (R0)
