
Comprehensive evaluation of skeleton features-based fall detection from Microsoft Kinect v2

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Most computer vision applications for human activity recognition exploit the fact that body features computed from a 3D skeleton increase robustness across persons and can lead to higher performance. Their success in recognizing activities, including falls, nevertheless depends on the correspondence between the activities and the joint/part features used. To establish this correspondence, this paper experimentally evaluates skeleton features-based fall detection by comparing detection performance across the combinations of skeleton features used in previous related works. We determine the skeleton features that best distinguish fall from non-fall frames, as well as the best performing classifier. To this end, we followed the classical five steps of supervised machine learning: (1) we collected a learning dataset composed of 42 fall and 37 non-fall videos from FallFree; (2) we extracted and (3) preprocessed the skeleton data of the training set; (4) we computed every candidate skeleton feature; and finally (5) we evaluated all extracted and selected features in two main experiments, one of which is based on neighborhood component analysis (NCA). This evaluation shows that fall detection based on skeleton features achieves very encouraging accuracy that varies with the features used. More specifically, we recommend the following feature sets: the 12 features selected by the NCA experiment, the original and normalized distances from the Kinect, and the seven features of the upper body part. These sets ranked 1st, 2nd, 4th, and 8th among the 22 feature sets, with accuracies of 99.5%, 99.4%, 97.8%, and 94.5%, respectively. In addition, random forest is the best performing classifier.
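The pipeline described in the abstract (extract skeleton features, select a subset with NCA, classify with a random forest) can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn: the feature matrix `X`, the labels `y`, the `distance_from_kinect` helper, and the choice of 12 components are placeholder assumptions, not the paper's data or implementation (which selects individual features by NCA weighting rather than projecting onto NCA components).

```python
# Hedged sketch of the abstract's pipeline, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis


def distance_from_kinect(joint_xyz):
    """Euclidean distance of one joint from the sensor origin
    (Kinect camera-space coordinates, in meters)."""
    return float(np.linalg.norm(joint_xyz))


# Stand-in data: 200 frames described by 22 candidate skeleton features,
# mirroring the 22 feature sets compared in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22))
y = rng.integers(0, 2, size=200)  # 1 = fall frame, 0 = non-fall frame

# NCA learns a supervised linear transform of the features; here it
# stands in for the paper's NCA-based selection of 12 features.
nca = NeighborhoodComponentsAnalysis(n_components=12, random_state=0)
X_nca = nca.fit_transform(X, y)

# Random forest was the best performing classifier in the evaluation;
# 5-fold cross-validation yields one accuracy score per fold.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X_nca, y, cv=5)
```

On real skeleton data, per-frame features such as the distance from the Kinect would be computed from tracked joint coordinates before being stacked into `X`; with the random stand-in data above, the cross-validation scores carry no meaning beyond demonstrating the workflow.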



Author information

Corresponding author: Salma Kammoun Jarraya

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Alzahrani, M.S., Jarraya, S.K., Ben-Abdallah, H. et al. Comprehensive evaluation of skeleton features-based fall detection from Microsoft Kinect v2. SIViP 13, 1431–1439 (2019). https://doi.org/10.1007/s11760-019-01490-9
