
Feature Comparison and Feature Fusion for Traditional Dances Recognition

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 383)

Abstract

Traditional dances constitute a significant part of the cultural heritage around the world. The great variety of traditional dances, along with the complexity of some of them, makes such dances difficult to identify, rendering traditional dance recognition a challenging subset of the general field of activity recognition. In this paper, three types of features are extracted to represent traditional dance video sequences, and a bag-of-words approach is used to perform activity recognition on a dataset consisting of Greek traditional dances. Each feature type is evaluated on its own in terms of recognition accuracy, and a fusion approach is also investigated. Features extracted through the training of a neural network, as well as the fusion of all three feature types, achieved the highest classification rates.
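For readers unfamiliar with the pipeline outlined above, the sketch below illustrates the general bag-of-words idea: local descriptors extracted from each video are quantized against a learned codebook, the resulting per-video histograms serve as representations for classification, and early fusion concatenates the histograms produced from the different feature types. This is a minimal illustration assuming scikit-learn and already-extracted descriptors; it is not the authors' implementation, and all function and variable names here are hypothetical.

    # Bag-of-words video representation with early fusion (illustrative sketch only).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_codebook(descriptors, n_words=500, seed=0):
        """Cluster local descriptors stacked from all training videos into a visual vocabulary."""
        return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(descriptors)

    def bow_histogram(codebook, video_descriptors):
        """Quantize one video's descriptors and return an L1-normalized word histogram."""
        words = codebook.predict(video_descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def fuse(histograms):
        """Early fusion: concatenate the per-feature-type histograms of one video."""
        return np.concatenate(histograms)

    # Hypothetical usage: train_descs[t][v] holds descriptors of feature type t for training video v.
    # codebooks = [build_codebook(np.vstack(train_descs[t])) for t in range(3)]
    # X_train = [fuse([bow_histogram(codebooks[t], train_descs[t][v]) for t in range(3)])
    #            for v in range(len(labels))]
    # clf = SVC(kernel="rbf").fit(X_train, labels)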




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kapsouras, I., Karanikolos, S., Nikolaidis, N., Tefas, A. (2013). Feature Comparison and Feature Fusion for Traditional Dances Recognition. In: Iliadis, L., Papadopoulos, H., Jayne, C. (eds) Engineering Applications of Neural Networks. EANN 2013. Communications in Computer and Information Science, vol 383. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41013-0_18


  • DOI: https://doi.org/10.1007/978-3-642-41013-0_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41012-3

  • Online ISBN: 978-3-642-41013-0

