
Multi-sensor human activity recognition using CNN and GRU

  • Regular Paper
  • Published in: International Journal of Multimedia Information Retrieval

Abstract

In the current era of rapid technological innovation, human activity recognition (HAR) has emerged as a principal research area in multimedia information retrieval, largely because it enables people to be monitored remotely. Data aggregated from multiple gyroscope and accelerometer sensors can be used to recognise human activities, which is a key objective of this study. Optimal results are attained by applying deep learning models to the collected data: we propose a hierarchical multi-resolution convolutional neural network (CNN) combined with gated recurrent units (GRU). Experiments on the mHealth and UCI data sets demonstrate the effectiveness of the proposed model, which achieved accuracies of 99.35% on the mHealth data set and 94.50% on the UCI data set.
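The pipeline the abstract describes — convolutional feature extraction at several temporal resolutions followed by a GRU that models the sequence dynamics — can be sketched in plain NumPy. This is a minimal illustration, not the paper's actual configuration: the kernel sizes, hidden width, random weights, and omitted bias terms are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(signal, kernel):
    # Valid-mode 1-D cross-correlation: the basic CNN feature extractor.
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def gru_step(x, h_prev, p):
    # One GRU time step (standard formulation; biases omitted for brevity).
    z = sigmoid(x @ p["Wz"] + h_prev @ p["Uz"])              # update gate
    r = sigmoid(x @ p["Wr"] + h_prev @ p["Ur"])              # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h_prev) @ p["Uh"])  # candidate state
    return (1 - z) * h_prev + z * h_tilde

# Toy accelerometer window: 64 samples of one sensor channel.
window = rng.standard_normal(64)

# Two kernel sizes stand in for the "multi-resolution" idea: each branch
# extracts features at a different temporal scale before the GRU.
feats = [conv1d(window, rng.standard_normal(k)) for k in (3, 7)]

d_in, d_h = 1, 8  # illustrative sizes, not the paper's hyperparameters
p = {k: rng.standard_normal((d_in, d_h)) * 0.1 for k in ("Wz", "Wr", "Wh")}
p.update({k: rng.standard_normal((d_h, d_h)) * 0.1 for k in ("Uz", "Ur", "Uh")})

h = np.zeros(d_h)
for f in feats[0]:              # run the GRU over one feature stream
    h = gru_step(np.array([f]), h, p)

print(h.shape)  # (8,)
```

In the full model, the feature streams from every resolution (and every sensor) would be fused and the final GRU state passed to a classifier over the activity labels; here a single stream suffices to show the data flow.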



Acknowledgements

The authors extend their appreciation to Researchers Supporting Project Number (RSP-2021/34), King Saud University, Riyadh, Saudi Arabia.

Author information

Corresponding author

Correspondence to Ohoud Nafea.


About this article


Cite this article

Nafea, O., Abdul, W. & Muhammad, G. Multi-sensor human activity recognition using CNN and GRU. Int J Multimed Info Retr 11, 135–147 (2022). https://doi.org/10.1007/s13735-022-00234-9
