Motion history image: its variants and applications

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

The motion history image (MHI) approach is a view-based temporal template method that is simple yet robust in representing movements, and it is widely employed by various research groups for action recognition, motion analysis and related applications. In this paper, we provide an overview of MHI-based human motion recognition techniques and applications. Since the inception of the MHI template for motion representation, various approaches have been adopted to improve this basic technique. We present all important variants of the MHI method and point out areas for further research based on the MHI method and its variants.
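The core MHI template described above can be sketched in a few lines: at each frame, pixels where motion is detected are set to a maximum value τ, while all other pixels decay toward zero, so recent motion appears brighter than older motion. The sketch below is a minimal illustration, not the authors' implementation; it assumes binary per-frame motion masks are already available (e.g., from frame differencing), and the names `update_mhi`, `tau`, and `delta` are chosen here for clarity (the exact decay scheme varies across the variants surveyed).

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One step of the basic MHI update: pixels with motion are set to
    tau; all other pixels decay by delta, clamped at zero."""
    decayed = np.maximum(mhi - delta, 0)
    return np.where(motion_mask, tau, decayed).astype(mhi.dtype)

# Toy sequence: a one-pixel "object" moving left to right along row 2
# of a 5x5 frame, one column per time step.
mhi = np.zeros((5, 5), dtype=np.int32)
for t in range(5):
    mask = np.zeros((5, 5), dtype=bool)
    mask[2, t] = True                      # motion at column t, row 2
    mhi = update_mhi(mhi, mask, tau=255, delta=50)

# Row 2 now holds a motion trail, brightest where motion is most recent:
print(mhi[2])  # [ 55 105 155 205 255]
```

The decay step is what gives the MHI its implicit encoding of motion direction and recency in a single grayscale image; binarizing the same template (any nonzero pixel set to 1) yields the related motion energy image (MEI).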



Author information

Corresponding author

Correspondence to Md. Atiqur Rahman Ahad.


Cite this article

Ahad, M.A.R., Tan, J.K., Kim, H. et al. Motion history image: its variants and applications. Machine Vision and Applications 23, 255–281 (2012). https://doi.org/10.1007/s00138-010-0298-4

