Multimodal Body Sensor for Recognizing the Human Activity Using DMOA Based FS with DL

  • Conference paper
  • First Online:
Mining Intelligence and Knowledge Exploration (MIKE 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13924)

Abstract

The relevance of automated recognition of human behaviors or actions stems from the breadth of its potential uses, including, but not limited to, surveillance, robotics, and personal health monitoring. Several computer vision-based approaches for identifying human activity in RGB and depth camera footage have emerged in recent years. These techniques include space-time trajectories, motion encoding, key pose extraction, occupancy patterns in 3D space, depth motion maps, and skeleton joints. Such camera-based methods can only be used inside a constrained area and are vulnerable to changes in lighting and background clutter. Although wearable inertial sensors offer a potential answer to these issues, they are not without drawbacks, including a reliance on the user's knowledge of their precise placement and orientation. Because the data acquired from different sensors are complementary, several sensing modalities are combined for reliable human action detection. This research therefore introduces a two-tiered hierarchical approach to activity recognition that employs a variety of wearable sensors. The dwarf mongoose optimization algorithm (DMOA) is used to select the best of the handcrafted features. It searches by emulating how dwarf mongooses forage for food: the population is divided into an alpha group, scouts, and babysitters, and each group follows a different strategy for locating the food supply. We tested a number of methods for video classification and action identification, including ConvLSTM, LRCN, and C3D. The proposed human action recognition (HAR) framework is evaluated on the UTD-MHAD dataset, a publicly available multimodal collection of 27 human activities. The suggested feature selection model for HAR is trained and tested with a variety of classifiers.
Experiments show that the proposed technique outperforms comparable methods in recognition accuracy.
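To make the selection stage concrete: a dwarf-mongoose-style search can drive binary feature selection by evolving a population of feature masks, with the alpha member guiding local moves and occasional bit exchanges playing the role of the babysitter swap. The sketch below is a simplified NumPy illustration only; the toy correlation-based fitness, the update rules, and all parameter names (`n_pop`, `peep`, the per-feature penalty) are assumptions for demonstration, not the paper's actual DMOA formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Toy fitness: mean absolute correlation of the selected features with
    # the target, minus a small penalty per feature kept (favors sparsity).
    if mask.sum() == 0:
        return -1.0
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.flatnonzero(mask)])
    return corr.mean() - 0.01 * mask.sum()

def dmoa_select(X, y, n_pop=10, n_iter=30, peep=2.0):
    """Return indices of features chosen by a DMOA-like binary search."""
    n_feat = X.shape[1]
    # Each mongoose is a binary mask over the feature set.
    pop = rng.integers(0, 2, size=(n_pop, n_feat))
    fit = np.array([fitness(m, X, y) for m in pop])
    for _ in range(n_iter):
        alpha = pop[fit.argmax()]               # alpha group leader
        for i in range(n_pop):
            cand = pop[i].copy()
            # Scout-style exploration: random bit flips, rate set by `peep`.
            flip = rng.random(n_feat) < peep / n_feat
            cand[flip] ^= 1
            # Babysitter-style exchange: occasionally copy bits from the alpha.
            take = rng.random(n_feat) < 0.1
            cand[take] = alpha[take]
            f = fitness(cand, X, y)
            if f > fit[i]:                      # greedy replacement
                pop[i], fit[i] = cand, f
    return np.flatnonzero(pop[fit.argmax()])
```

In the two-tier pipeline described above, the mask returned here would feed the retained features to a downstream classifier such as ConvLSTM, LRCN, or C3D.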


Author information

Corresponding author

Correspondence to A. Likhitha.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kumar, M.R., Likhitha, A., Komali, A., Keerthana, D., Gowthami, G. (2023). Multimodal Body Sensor for Recognizing the Human Activity Using DMOA Based FS with DL. In: Kadry, S., Prasath, R. (eds) Mining Intelligence and Knowledge Exploration. MIKE 2023. Lecture Notes in Computer Science(), vol 13924. Springer, Cham. https://doi.org/10.1007/978-3-031-44084-7_1

  • DOI: https://doi.org/10.1007/978-3-031-44084-7_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44083-0

  • Online ISBN: 978-3-031-44084-7

  • eBook Packages: Computer Science (R0)
