RegFrame: fast recognition of simple human actions on a stand-alone mobile device

Original Article · Neural Computing and Applications

Abstract

In recent years, human action recognition in videos has become an active research topic, with applications in surveillance, security, somatic games, and interactive operations. Since most human action recognition systems are designed for PCs, their performance degrades when they are ported to mobile devices. In this paper, we develop a human action recognition system called "RegFrame," which rapidly and accurately recognizes simple human actions, including 3D actions, on a stand-alone mobile device. The system divides the action recognition process into two steps: object recognition and movement detection. Movement detection is implemented by a novel Nine-Square algorithm that almost entirely avoids floating-point computation, which reduces recognition time. Experimental results show that the proposed RegFrame works reliably in different testing scenarios and outperforms the action recognition method of the Samsung Galaxy V (S5) by up to 20% in terms of action recognition time. In addition, the proposed system can be flexibly integrated with a variety of applications.
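The abstract only outlines the Nine-Square idea, so the following is a minimal sketch, assuming the name refers to a 3x3 partition of the frame that is tracked with integer arithmetic only. The class and method names (NineSquareDetector, cellOf, update) and the direction labels are illustrative assumptions, not the authors' implementation.

```java
// Hypothetical sketch of a "Nine-Square"-style movement detector: the frame is
// split into a 3x3 grid and the grid cell containing the recognized object's
// centre is compared across frames. All arithmetic is integer-only, in the
// spirit of the abstract's claim of avoiding floating-point computation.
public final class NineSquareDetector {

    /** Maps a bounding-box centre (cx, cy) to one of the nine grid cells (0..8). */
    public static int cellOf(int cx, int cy, int frameWidth, int frameHeight) {
        int col = Math.min(cx * 3 / frameWidth, 2);   // integer division, no floats
        int row = Math.min(cy * 3 / frameHeight, 2);
        return row * 3 + col;
    }

    private int lastCell = -1;

    /**
     * Returns a coarse movement label ("LEFT", "RIGHT", "UP", "DOWN", "NONE")
     * by comparing the current grid cell with the cell from the previous frame.
     */
    public String update(int cx, int cy, int frameWidth, int frameHeight) {
        int cell = cellOf(cx, cy, frameWidth, frameHeight);
        if (lastCell < 0 || cell == lastCell) {
            lastCell = cell;
            return "NONE";
        }
        int dCol = (cell % 3) - (lastCell % 3);
        int dRow = (cell / 3) - (lastCell / 3);
        lastCell = cell;
        if (Math.abs(dCol) >= Math.abs(dRow)) {
            return dCol > 0 ? "RIGHT" : "LEFT";
        }
        return dRow > 0 ? "DOWN" : "UP";
    }
}
```

Under these assumptions, each frame costs only a handful of integer divisions and comparisons, which is consistent with the paper's goal of keeping recognition time low on stand-alone mobile hardware.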




Author information

Correspondence to Jianqing Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Han, D., Li, J., Zeng, Z. et al. RegFrame: fast recognition of simple human actions on a stand-alone mobile device. Neural Comput & Applic 30, 2787–2793 (2018). https://doi.org/10.1007/s00521-017-2883-1
