
A Simple Human Interaction Recognition Based on Global GIST Feature Model

  • Conference paper
  • Proceedings: Intelligent Robotics and Applications (ICIRA 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9244)


Abstract

The precision of human interaction recognition relies mainly on the discriminative power of the action feature descriptors. Descriptors that combine global and local information are commonly used to classify interaction actions. A novel approach based on the Gist feature model is proposed for recognizing human interaction actions; its advantages are simple features, easy operation, good real-time performance, and flexible application. Drawing on Gaussian pyramid theory and the center-surround mechanism, gist features are extracted from three channels to represent the human interaction motion, and the classification result is obtained by applying a frame-to-frame nearest neighbor classifier to each channel and fusing the outputs with weights. The method is tested on the UT-Interaction dataset, and the experiments show that it achieves stable performance with a simple implementation.
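To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of the steps the abstract names: a Gaussian pyramid per frame, center-surround differences pooled into a coarse gist descriptor, three feature channels, and frame-to-frame nearest neighbor classification with weighted fusion. The channel choice (intensity plus two color-opponency channels), the pyramid levels, the pooling grid, and the fusion weights are all illustrative assumptions.

```python
# Illustrative sketch of a gist-based interaction recognizer. Assumed
# details: 5 pyramid levels, center level 1 vs. surround level 3, a 4x4
# averaging grid, intensity/R-G/B-Y channels, and fixed fusion weights.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels=5):
    """Repeatedly blur and downsample a 2-D image by a factor of 2."""
    pyr = [img.astype(np.float64)]
    for _ in range(1, levels):
        pyr.append(gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
    return pyr

def center_surround_gist(pyr, center=1, surround=3, grid=4):
    """Absolute center-surround difference, pooled over a grid x grid layout."""
    c = pyr[center]
    factors = np.array(c.shape) / np.array(pyr[surround].shape)
    s = zoom(pyr[surround], factors, order=1)  # upsample surround to center size
    diff = np.abs(c - s)
    h, w = diff.shape
    return np.array([diff[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid].mean()
                     for i in range(grid) for j in range(grid)])

def gist_channels(rgb_frame):
    """Three assumed channels: intensity, R-G and B-Y color opponency."""
    f = rgb_frame.astype(np.float64)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    channels = [(r + g + b) / 3.0, r - g, b - (r + g) / 2.0]
    return [center_surround_gist(gaussian_pyramid(ch)) for ch in channels]

def classify(test_frames, train_sets, weights=(0.4, 0.3, 0.3)):
    """Frame-to-frame 1-NN per channel, accumulated as a weighted vote.

    train_sets maps an action label to a list of per-frame feature lists
    (each produced by gist_channels on a training frame).
    """
    scores = {label: 0.0 for label in train_sets}
    for frame in test_frames:
        feats = gist_channels(frame)
        for ch, w in enumerate(weights):
            # the nearest training frame for this channel casts a weighted vote
            best_label, best_dist = None, np.inf
            for label, frame_feats in train_sets.items():
                for tf in frame_feats:
                    d = np.linalg.norm(feats[ch] - tf[ch])
                    if d < best_dist:
                        best_label, best_dist = label, d
            scores[best_label] += w
    return max(scores, key=scores.get)
```

In practice the training dictionary would hold the gist_channels output for every frame of each labelled UT-Interaction clip; the fixed weight tuple above merely stands in for whatever fusion weights the paper tunes.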



Author information

Corresponding author

Correspondence to Xiaofei Ji.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Ji, X., Zuo, X., Wang, C., Wang, Y. (2015). A Simple Human Interaction Recognition Based on Global GIST Feature Model. In: Liu, H., Kubota, N., Zhu, X., Dillmann, R., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2015. Lecture Notes in Computer Science (LNAI), vol 9244. Springer, Cham. https://doi.org/10.1007/978-3-319-22879-2_45


  • DOI: https://doi.org/10.1007/978-3-319-22879-2_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-22878-5

  • Online ISBN: 978-3-319-22879-2

  • eBook Packages: Computer Science (R0)
