Skip to main content

Visual Context-Aware Person Fall Detection

  • Conference paper
  • Intelligent Decision Technologies (KESIDT 2024)
  • Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 411)


Abstract

As the global population ages, the number of fall-related incidents is rising. Effective fall detection systems, particularly in the healthcare sector, are crucial for mitigating the risks associated with such events. This study evaluates the role of visual context, including background objects, in the accuracy of fall detection classifiers. We present a segmentation pipeline that semi-automatically separates individuals and objects in images. Well-established models, namely ResNet-18, EfficientNetV2-S, and Swin-Small, are trained and evaluated. During training, pixel-based transformations are applied to the segmented objects, and the models are then evaluated on raw images without segmentation. Our findings highlight the significant influence of visual context on fall detection. Applying Gaussian blur to the image background notably improves the performance and generalization of all models. Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems and lead to false positive alarms; however, we demonstrate that object-specific contextual transformations during training effectively mitigate this challenge. Further analysis using saliency maps supports our observation that visual context is crucial in classification tasks. Both the dataset processing API and the segmentation pipeline are available at https://github.com/A-NGJ/image-segmentation-cli.
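To illustrate the kind of background transformation the abstract describes, the sketch below blurs every pixel outside a person's segmentation mask while leaving the person sharp. This is not the authors' actual pipeline; the function name, parameters, and the use of `scipy.ndimage.gaussian_filter` are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def blur_background(image: np.ndarray, person_mask: np.ndarray,
                    sigma: float = 5.0) -> np.ndarray:
    """Blur the background of an H x W x 3 image while keeping the
    pixels inside the binary H x W person mask unchanged."""
    # Blur only the two spatial axes; leave the color channels untouched.
    blurred = gaussian_filter(image.astype(np.float64), sigma=(sigma, sigma, 0))
    keep = person_mask.astype(bool)[..., None]  # broadcast mask over channels
    return np.where(keep, image.astype(np.float64), blurred)
```

In the setup described above, such a transformation would be applied as a training-time augmentation, with evaluation still performed on raw, untransformed images.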


Notes

  1. https://www.who.int/news-room/fact-sheets/detail/falls.
  2. https://www.lifeline.com/medical-alert-systems/falldetection/.
  3. https://support.apple.com/en-us/108896.
  4. https://www.medicalguardian.com/.
  5. https://labelstud.io/.
  6. https://cocodataset.org/#formatdata.


Author information

Corresponding author

Correspondence to Zenjie Li.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Nagaj, A., Li, Z., Papadopoulos, D.P., Nasrollahi, K. (2025). Visual Context-Aware Person Fall Detection. In: Czarnowski, I., Howlett, R.J., Jain, L.C. (eds) Intelligent Decision Technologies. KESIDT 2024. Smart Innovation, Systems and Technologies, vol 411. Springer, Singapore. https://doi.org/10.1007/978-981-97-7419-7_19

