
Deep learning based multimodal complex human activity recognition using wearable devices

  • Published in: Applied Intelligence

Abstract

Wearable device based human activity recognition, an important field of ubiquitous and mobile computing, is attracting increasing attention. Compared with simple human activity (SHA) recognition, complex human activity (CHA) recognition faces more challenges, e.g., various modalities of input and long sequential information. In this paper, we propose a deep learning model named DEBONAIR (Deep lEarning Based multimodal cOmplex humaN Activity Recognition) to address these problems; it is an end-to-end model that extracts features systematically. We design specific sub-network architectures for different sensor data and merge the outputs of all sub-networks to extract fusion features. An LSTM network is then utilized to learn the sequential information of CHAs. We evaluate the model on two multimodal CHA datasets. The experimental results show that DEBONAIR significantly outperforms state-of-the-art CHA recognition models.
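The pipeline the abstract describes (per-modality sub-networks, feature fusion, then an LSTM over the window sequence) can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: all layer types, sizes, and the example modalities (accelerometer, heart rate) are assumptions for the sake of the example.

```python
# Sketch of a DEBONAIR-style pipeline: one sub-network per sensor modality,
# concatenated (fused) features per time window, and an LSTM over windows.
import torch
import torch.nn as nn

class ModalitySubNet(nn.Module):
    """1-D convolutional feature extractor for a single sensor modality."""
    def __init__(self, in_channels, out_features=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis within a window
        )
        self.fc = nn.Linear(16, out_features)

    def forward(self, x):              # x: (batch, in_channels, window_len)
        return self.fc(self.conv(x).squeeze(-1))

class DebonairSketch(nn.Module):
    """Fuse per-modality features, then model the window sequence with an LSTM."""
    def __init__(self, modality_channels, n_classes, feat=32, hidden=64):
        super().__init__()
        self.subnets = nn.ModuleList(
            ModalitySubNet(c, feat) for c in modality_channels
        )
        self.lstm = nn.LSTM(feat * len(modality_channels), hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, inputs):
        # inputs: one tensor per modality,
        # each shaped (batch, seq_len, channels, window_len)
        b, t = inputs[0].shape[:2]
        fused = torch.cat(
            [net(x.flatten(0, 1)).view(b, t, -1)      # per-window features
             for net, x in zip(self.subnets, inputs)],
            dim=-1,                                    # fusion across modalities
        )
        out, _ = self.lstm(fused)      # sequential modelling across windows
        return self.head(out[:, -1])   # classify from the final LSTM state

model = DebonairSketch(modality_channels=[3, 1], n_classes=7)
acc = torch.randn(8, 10, 3, 50)   # e.g. 3-axis accelerometer, 50 samples/window
hr = torch.randn(8, 10, 1, 50)    # e.g. single-channel heart rate
logits = model([acc, hr])
print(logits.shape)                # torch.Size([8, 7])
```

The key design point mirrored here is that each modality gets its own extractor before fusion, so heterogeneous sensors (different channel counts and dynamics) are not forced through a shared early representation.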



Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities (No. 2020QNA5017).

Author information

Correspondence to Ling Chen.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Chen, L., Liu, X., Peng, L. et al. Deep learning based multimodal complex human activity recognition using wearable devices. Appl Intell 51, 4029–4042 (2021). https://doi.org/10.1007/s10489-020-02005-7

