
Predictive Analytics for Recognizing Human Activities Using Residual Network and Fine-Tuning

  • Conference paper
  • First Online:
Big Data Analytics (BDA 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13147)

Abstract

Human Action Recognition (HAR) is a rapidly growing area of computer vision research due to its wide applicability. Detecting people in images is difficult because of their varied appearance and the broad range of poses they can assume. Deep learning is now widely used across many research fields because it outperforms traditional machine learning methods and generalizes well from raw inputs. For many visual recognition tasks, the depth of the learned representations is critical: deeper neural networks can represent more complex features and improve robustness and performance, but they are hard to train because of the vanishing gradient problem. Skip connections in residual networks (ResNet) address this problem by allowing each residual block to learn the identity function easily, so ResNet avoids the performance degradation seen in very deep plain networks. This paper proposes an intelligent human action recognition system built on the residual learning framework ResNet-50 with transfer learning, which automatically recognizes daily human activities. The work presents extensive empirical evidence that residual networks are easier to optimize and gain accuracy from considerably greater depth. Experiments are performed on the public UTKinect-Action3D dataset of daily human activities. The experimental results show that the proposed system outperforms other state-of-the-art methods, achieving a recognition accuracy of 98.25% with a loss of 0.11 over 200 epochs.
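
To make the residual-learning and transfer-learning ideas above concrete, the following PyTorch sketch is offered as illustration only, not as the authors' released code: a toy residual block shows how a skip connection lets the block fall back to the identity mapping, and a pretrained ResNet-50 has its ImageNet head replaced for action classification. The choice of PyTorch, the class count of 10 (matching UTKinect-Action3D), and the optimizer settings are assumptions, not details taken from the paper.

# Illustrative sketch only; hyperparameters and framework are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class ToyResidualBlock(nn.Module):
    """y = F(x) + x: if F(x) is driven toward zero, the block reduces to the identity."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.conv2(self.relu(self.conv1(x)))  # F(x)
        return self.relu(residual + x)                   # skip connection adds the input back


NUM_CLASSES = 10  # UTKinect-Action3D defines 10 daily activities (assumed class count here)

# Transfer learning: start from ImageNet weights, freeze the backbone,
# and train only the new classification head; layers can be unfrozen later for fine-tuning.
model = models.resnet50(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

Freezing the backbone first and unfreezing it later is one common fine-tuning recipe; the paper's exact training schedule may differ.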

Author information

Corresponding author

Correspondence to Krishan Kumar.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Negi, A., Kumar, K., Chaudhari, N.S., Singh, N., Chauhan, P. (2021). Predictive Analytics for Recognizing Human Activities Using Residual Network and Fine-Tuning. In: Srirama, S.N., Lin, J.C.W., Bhatnagar, R., Agarwal, S., Reddy, P.K. (eds) Big Data Analytics. BDA 2021. Lecture Notes in Computer Science, vol 13147. Springer, Cham. https://doi.org/10.1007/978-3-030-93620-4_21

  • DOI: https://doi.org/10.1007/978-3-030-93620-4_21

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93619-8

  • Online ISBN: 978-3-030-93620-4

  • eBook Packages: Computer Science, Computer Science (R0)
