
A dual-stream fused neural network for fall detection in multi-camera and \(360^{\circ }\) videos

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Globally, human falls are the second leading cause of death from unintentional injuries. In most cases, these fatalities result from a lack of timely medical aid. Consequently, there has long been strong demand for systems that can quickly relay fall-related information to caretakers so that a medical relief team can reach the victim in time. Traditional fall detection schemes based on wearable sensors such as accelerometers and gyroscopes are highly intrusive and generate many false positives in real-world conditions. Current research in this domain has therefore moved toward harnessing low-cost vision sensors and the power of deep learning. To this end, we present a dual-stream fused neural network (DSFNN) for fall detection in multi-camera and \(360^{\circ }\) video streams. The DSFNN learns to extract spatio-temporal information using two neural networks trained independently on RGB video sequences of fall and non-fall activities and on their corresponding single dynamic images. Once trained, the model fuses the prediction scores of the two networks using a weighted fusion scheme to obtain the final decision. We assessed the proposed DSFNN on two multi-camera fall datasets, UP-Fall and URFD, and on a new in-house \(360^{\circ }\) video dataset of fall and non-fall activities. Evaluation across several performance metrics showed that the proposed scheme outperforms previous state-of-the-art fall detection methods. To support further research in the fall detection domain, we will make the source code and the in-house fall dataset available to the research community on request.
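The two components named in the abstract — a single dynamic image summarising each clip for one stream, and a weighted fusion of the two streams' prediction scores — can be sketched in a few lines. The snippet below is a minimal illustration only: it assumes the approximate rank-pooling coefficients of Bilen et al. for the dynamic image, and the fusion weight, label order, and function names are hypothetical placeholders rather than the authors' implementation.

```python
# Minimal sketch of the two steps described in the abstract: building one
# dynamic image per clip and weight-fusing the two streams' class scores.
# Assumptions: approximate rank-pooling coefficients (Bilen et al.); the
# fusion weight `w`, the label order, and all names are illustrative.
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a clip of RGB frames with shape (T, H, W, C) into one image
    using approximate rank-pooling weights alpha_t."""
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    H = np.cumsum(1.0 / t)                    # harmonic numbers H_1 .. H_T
    H_prev = np.concatenate(([0.0], H[:-1]))  # H_0 .. H_{T-1}
    alpha = 2.0 * (T - t + 1) - (T + 1) * (H[-1] - H_prev)
    di = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so the result can feed an ImageNet-pretrained CNN.
    di = 255.0 * (di - di.min()) / (di.max() - di.min() + 1e-8)
    return di.astype(np.uint8)

def fuse_scores(p_rgb: np.ndarray, p_dyn: np.ndarray, w: float = 0.5) -> int:
    """Weighted late fusion of the two streams' class-probability vectors.
    `w` would be tuned on validation data; 0.5 is only a placeholder."""
    fused = w * p_rgb + (1.0 - w) * p_dyn
    return int(np.argmax(fused))              # e.g. 0 = non-fall, 1 = fall

# Toy usage: a random 30-frame clip and dummy per-stream softmax scores.
clip = np.random.randint(0, 256, size=(30, 224, 224, 3), dtype=np.uint8)
di = dynamic_image(clip)
label = fuse_scores(np.array([0.3, 0.7]), np.array([0.4, 0.6]), w=0.6)
```

In practice, each stream's probabilities would come from its own trained CNN rather than the dummy vectors used here, and the fusion weight would be chosen on a validation split.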



Acknowledgements

The authors would like to thank the Director, CSIR-CEERI, Pilani, for supporting and encouraging research activities at CSIR-CEERI, Pilani. The constant motivation provided by the head of the Cognitive Computing Group, CSIR-CEERI, is also gratefully acknowledged. The authors would also like to thank all the volunteers for their active participation in preparing the database.

Author information


Corresponding author

Correspondence to Sumeet Saurav.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Saurav, S., Saini, R. & Singh, S. A dual-stream fused neural network for fall detection in multi-camera and \(360^{\circ }\) videos. Neural Comput & Applic 34, 1455–1482 (2022). https://doi.org/10.1007/s00521-021-06495-5

