
Representing Feature Quantization Approach Using Spatial-Temporal Relation for Action Recognition

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 7143)

Abstract

In this paper we propose an efficient and intuitive algorithm for designing the feature vector quantization used with space-time interest points in video surveillance. Activity recognition performance generally depends on the number of significant features, but with proper feature quantization the same performance can be delivered with fewer features. The basic characteristics of the algorithm are discussed and demonstrated experimentally. The approach is scalable and works efficiently under varying conditions. In the experiments section, we show that our feature quantization approach requires fewer features than standard quantization while delivering the same performance.
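The full text is behind the paywall, but the abstract places the work in the familiar bag-of-features pipeline for space-time interest points. As a point of reference only, the sketch below shows the conventional quantization baseline the abstract compares against: k-means clustering of local descriptors into a visual vocabulary, followed by a per-video histogram encoding. The random descriptor data, vocabulary size, and 162-dimensional descriptor length are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): standard k-means vector
# quantization of space-time interest point (STIP) descriptors into a
# bag-of-words histogram, i.e., the "standard quantization" baseline.
import numpy as np
from sklearn.cluster import KMeans


def build_codebook(descriptors, n_words, seed=0):
    """Cluster local descriptors (n_points x dim) into a visual vocabulary."""
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    kmeans.fit(descriptors)
    return kmeans


def quantize_video(descriptors, codebook):
    """Assign each descriptor to its nearest codeword and return a
    normalized histogram representing the whole video clip."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder for real STIP descriptors (e.g., HOG/HOF computed around
    # detected interest points); random data stands in for them here.
    train_descs = rng.normal(size=(5000, 162))
    video_descs = rng.normal(size=(300, 162))

    codebook = build_codebook(train_descs, n_words=200)
    bow = quantize_video(video_descs, codebook)
    print(bow.shape)  # (200,) fixed-length feature vector for a classifier such as an SVM
```

The point of comparison in the paper is that a well-designed quantization can match this baseline's recognition accuracy while using fewer features.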






Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vishwakarma, S., Agrawal, A. (2012). Representing Feature Quantization Approach Using Spatial-Temporal Relation for Action Recognition. In: Kundu, M.K., Mitra, S., Mazumdar, D., Pal, S.K. (eds) Perception and Machine Intelligence. PerMIn 2012. Lecture Notes in Computer Science, vol 7143. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27387-2_13


  • DOI: https://doi.org/10.1007/978-3-642-27387-2_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27386-5

  • Online ISBN: 978-3-642-27387-2

  • eBook Packages: Computer Science, Computer Science (R0)
