Human Activity Recognition with IMU and Vital Signs Feature Fusion

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13141)

Abstract

Combining data from different sources into an integrated view is a recent trend that takes advantage of the evolution of the Internet of Things (IoT) in recent years. The fusion of different modalities has applications in various fields, including healthcare and security systems. Human activity recognition (HAR) is among the most common applications of a healthcare or eldercare system. Inertial measurement unit (IMU) wearable sensors, such as accelerometers and gyroscopes, are often utilized for HAR applications. In this paper, we investigate the performance of wearable IMU sensors along with vital signs sensors for HAR. An extensive feature extraction, including both time- and frequency-domain features as well as transitional features for the vital signs, was performed, followed by a feature selection method. Classification algorithms and different early and late fusion methods were applied to a public dataset. Experimental results revealed that both IMU and vital signs achieve reasonable HAR accuracy and F1-score across all classes. Feature selection significantly reduced the number of both IMU and vital signs features while also improving classification accuracy. The early- and late-level fusion methods likewise outperformed each modality alone, reaching an accuracy of up to 95.32%.
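The pipeline outlined in the abstract (windowed time- and frequency-domain feature extraction per modality, then fusion either early, by concatenating feature vectors, or late, by combining per-modality classifier outputs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature set, function names, and fusion weight are assumptions chosen for clarity.

```python
import numpy as np

def window_features(signal, fs=100.0):
    """Illustrative time- and frequency-domain features for one sensor window.

    `signal` is a 1-D array of samples; `fs` is the sampling rate in Hz.
    """
    feats = {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "min": float(np.min(signal)),
        "max": float(np.max(signal)),
    }
    # Frequency-domain features via the real FFT of the window.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Dominant frequency, skipping the DC component at index 0.
    feats["dominant_freq"] = float(freqs[np.argmax(spectrum[1:]) + 1])
    feats["spectral_energy"] = float(np.sum(spectrum ** 2) / len(signal))
    return feats

def early_fusion(imu_feats, vital_feats):
    """Early fusion: concatenate the feature vectors of both modalities
    into one vector that a single classifier would consume."""
    return np.concatenate([imu_feats, vital_feats])

def late_fusion(prob_imu, prob_vitals, w=0.5):
    """Late fusion: weighted average of per-modality class probabilities,
    returning the index of the winning activity class."""
    fused = w * np.asarray(prob_imu) + (1.0 - w) * np.asarray(prob_vitals)
    return int(np.argmax(fused))
```

For example, a 5 Hz sinusoid sampled at 100 Hz yields `dominant_freq == 5.0`, and late fusion of the (hypothetical) probability vectors `[0.2, 0.8]` and `[0.6, 0.4]` with equal weights picks class 1.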


Notes

  1. https://rea-project.gr/en/home-en/.

  2. https://xr4drama.eu.


Acknowledgment

This research was supported by the xR4DRAMA project (grant agreement No. 952133), funded by the European Union’s Horizon 2020 research and innovation programme, and by the REA project (project code: T1EDK-00686), co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE - INNOVATE.

Author information

Corresponding author

Correspondence to Vasileios-Rafail Xefteris.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Xefteris, VR., Tsanousa, A., Mavropoulos, T., Meditskos, G., Vrochidis, S., Kompatsiaris, I. (2022). Human Activity Recognition with IMU and Vital Signs Feature Fusion. In: Þór Jónsson, B., et al. MultiMedia Modeling. MMM 2022. Lecture Notes in Computer Science, vol 13141. Springer, Cham. https://doi.org/10.1007/978-3-030-98358-1_23

  • DOI: https://doi.org/10.1007/978-3-030-98358-1_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98357-4

  • Online ISBN: 978-3-030-98358-1
