The Future of Human Activity Recognition: Deep Learning or Feature Engineering?

Neural Processing Letters

Abstract

A significant gap exists in our knowledge of how domain-specific feature extraction compares to unsupervised feature learning in the latent space of a deep neural network for a range of temporal applications, including human activity recognition (HAR). This paper addresses this gap specifically for fall detection and motion recognition using acceleration data. To ensure reproducibility, we use a publicly available dataset, UniMiB-SHAR, with a well-established history in the HAR literature. We methodically analyze the performance of 64 different combinations of (i) learning representations (raw temporal data or extracted features) and (ii) traditional and modern classifiers with different topologies, evaluated on (iii) both binary (fall detection) and multi-class (activities of daily living) tasks. We report and discuss our findings and conclude that while feature engineering may still be competitive for HAR, the trainable front-ends of modern deep learning algorithms can benefit from raw temporal data, especially in large quantities. In fact, this paper claims state-of-the-art performance, significantly outperforming the most recent literature on this dataset in both activity recognition (98.02% vs. 88.41%) and fall detection (99.82% vs. 98.71%) using raw temporal input.
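
To make the two representations concrete, the sketch below (illustrative only, not the authors' released code) contrasts a 1D-CNN whose trainable front-end consumes raw tri-axial acceleration windows with a simple set of hand-engineered per-axis statistics that a traditional classifier could use instead. The window length, class count, and layer sizes are assumed placeholders rather than values taken from the paper.

```python
# Hypothetical sketch: raw-signal 1D-CNN vs. hand-engineered features for HAR.
# WINDOW_LEN, N_CLASSES, and the layer sizes are assumptions for illustration.
import numpy as np
import tensorflow as tf

WINDOW_LEN = 151   # assumed samples per acceleration window
N_CHANNELS = 3     # x, y, z acceleration
N_CLASSES = 17     # e.g. a multi-class activity split; 2 for binary fall detection

def build_raw_cnn():
    """1D-CNN whose trainable front-end learns features from raw signals."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

def engineered_features(window):
    """Example domain-specific features: simple per-axis statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

model = build_raw_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_raw, y, validation_split=0.2, epochs=30)  # X_raw: (N, 151, 3)
```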

Acknowledgements

This work is sponsored in part by Florida High Tech Corridor Research Grant FHT 19-06 titled “Algorithmic Prediction and Recognition of Human Activity and Falls from Wireless Accelerometer Data”.

Author information

Corresponding author

Correspondence to Ria Kanjilal.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest with respect to the work and findings of this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kanjilal, R., Uysal, I. The Future of Human Activity Recognition: Deep Learning or Feature Engineering?. Neural Process Lett 53, 561–579 (2021). https://doi.org/10.1007/s11063-020-10400-x

