
FASENet: A Two-Stream Fall Detection and Activity Monitoring Model Using Pose Keypoints and Squeeze-and-Excitation Networks

  • Conference paper

Intelligent Information and Database Systems (ACIIDS 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13758)

Abstract

Numerous frameworks have been proposed for vision-based fall detection and activity monitoring. These works leverage state-of-the-art algorithms such as 2D and 3D convolutional neural networks to analyze and process video data. However, such models are computationally expensive, which prevents their use at scale on low-resource devices. Moreover, prior work in the literature has not considered modelling features for both simple and complex actions within a video segment, information that is crucial when identifying actions for a given task. Hence, this work proposes FASENet, a 1D convolutional neural network-based two-stream fall detection and activity monitoring model using squeeze-and-excitation networks. By taking pose keypoints as input instead of raw video frames, the model can use 1D convolutions, which are computationally cheaper than 2D or 3D convolutions, making the architecture more efficient. FASENet processes pose segments through two streams, a compact stream and a dilated stream, which extract features for simple and complex actions, respectively. In addition, squeeze-and-excitation networks are applied between these streams to recalibrate the combined features according to their importance. The network was evaluated on three publicly available datasets: the Adhikari Dataset, the UP-Fall Dataset, and the UR-Fall Dataset. In the experiments, FASENet outperformed prior state-of-the-art work on the Adhikari Dataset in accuracy, precision, and F1, and achieved the best precision on the UP-Fall and UR-Fall Datasets. Finally, FASENet was also observed to reduce false positive rates compared to a prior related study.
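The architectural specifics live in the chapter itself, but the pipeline the abstract describes can be sketched in plain NumPy: a pose segment passes through a compact (undilated) and a dilated 1D convolution stream, the streams are combined, and a squeeze-and-excitation step rescales each channel by a learned importance weight. Everything concrete here is an illustrative assumption, not FASENet's actual configuration: the keypoint layout (17 COCO-style keypoints × 2 coordinates), kernel sizes, dilation rate, ReLU activations, concatenation as the fusion, and the SE reduction ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, dilation=1):
    """Valid dilated 1D convolution with ReLU: x is (C_in, T), w is (C_out, C_in, K)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - (k - 1) * dilation  # valid-padding output length
    y = np.zeros((c_out, t_out))
    for j in range(k):  # accumulate each tap over its dilated offset
        y += np.einsum('oi,it->ot', w[:, :, j], x[:, j * dilation : j * dilation + t_out])
    return np.maximum(y, 0.0)

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation on a (C, T) feature map."""
    s = x.mean(axis=1)                    # squeeze: global average pool over time
    h = np.maximum(w1 @ s, 0.0)           # excitation: FC + ReLU (reduction)
    z = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # FC + sigmoid -> per-channel weights in (0, 1)
    return x * z[:, None]                 # recalibrate: scale each channel

# A pose segment: 34 values per frame (17 keypoints x 2 coords, an assumption), 30 frames.
seg = rng.standard_normal((34, 30))
w_compact = rng.standard_normal((16, 34, 3)) * 0.1
w_dilated = rng.standard_normal((16, 34, 3)) * 0.1

compact = conv1d(seg, w_compact, dilation=1)  # narrow receptive field: simple actions
dilated = conv1d(seg, w_dilated, dilation=3)  # wide receptive field: complex actions
T = min(compact.shape[1], dilated.shape[1])
fused = np.concatenate([compact[:, :T], dilated[:, :T]], axis=0)  # combine streams

w1 = rng.standard_normal((8, 32)) * 0.1   # reduction ratio 4 (assumed)
w2 = rng.standard_normal((32, 8)) * 0.1
out = se_recalibrate(fused, w1, w2)       # (32, T) recalibrated features
```

The dilated stream sees the same number of taps as the compact one but spans a longer time window, which is one way to capture slower, multi-phase actions without extra parameters; the SE weights then let the network emphasise whichever stream's channels matter for the current segment.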


Notes

  1. http://falldataset.com.

  2. http://sites.google.com/up.edu.mx/har-up/.

  3. http://fenix.univ.rzeszow.pl/mkepski/ds/uf.html.




Corresponding author

Correspondence to Jessie James P. Suarez.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Suarez, J.J.P., Orillaza, N.S., Naval, P.C. (2022). FASENet: A Two-Stream Fall Detection and Activity Monitoring Model Using Pose Keypoints and Squeeze-and-Excitation Networks. In: Nguyen, N.T., Tran, T.K., Tukayev, U., Hong, T.P., Trawiński, B., Szczerbicki, E. (eds) Intelligent Information and Database Systems. ACIIDS 2022. Lecture Notes in Computer Science, vol 13758. Springer, Cham. https://doi.org/10.1007/978-3-031-21967-2_38


  • DOI: https://doi.org/10.1007/978-3-031-21967-2_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21966-5

  • Online ISBN: 978-3-031-21967-2

  • eBook Packages: Computer Science, Computer Science (R0)
