
A multi-modal dataset for gait recognition under occlusion

Published in: Applied Intelligence

Abstract

Gait recognition aims to identify people by the way they walk. Currently available gait recognition datasets mainly contain single-person gait data captured under relatively simple walking conditions, which limits research on robust gait recognition methods. In this paper, the OG RGB+D dataset is presented to address this crucial limitation of other gait datasets. It covers the common walking conditions under occlusion in daily life, that is, conditions in which people's normal walking patterns are occluded, including self-occlusion caused by viewing angle, occlusion caused by clothing or carried objects, and mutual occlusion between people. The dataset provides multi-modal data to support different types of methods, collected by multiple Azure Kinect DK sensors using a synchronous data acquisition system (Multi-Kinect SDAS). Moreover, we propose a model-based gait recognition method, SkeletonGait, for gait recognition in walking conditions under occlusion; it learns discriminative gait features from a human dual skeleton model, composed of skeleton and anthropometric features, through a Siamese Spatio-Temporal Graph Convolutional Network (Siamese ST-GCN). The experimental results show that SkeletonGait surpasses state-of-the-art methods in cases of severe occlusion. We believe that the introduction of our dataset will enable the community to apply, adapt, and develop various robust gait recognition methods. The dataset will be available at https://github.com/cvNXE/OG-RGB-D-gait-dataset.
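To illustrate the two ingredients of the dual skeleton model named in the abstract, the sketch below derives anthropometric features (bone lengths) from 3D joint positions and compares two gait embeddings with a Siamese-style Euclidean distance. This is a minimal illustration under stated assumptions, not the SkeletonGait pipeline: the joint names and bone list here are hypothetical placeholders, and the actual skeleton layout would follow the Azure Kinect body-tracking output.

```python
import math

# Hypothetical bone list for illustration only; the real skeleton topology
# comes from the Azure Kinect DK body-tracking output, not from this sketch.
BONES = [("hip", "knee"), ("knee", "ankle"),
         ("shoulder", "elbow"), ("elbow", "wrist")]

def bone_lengths(joints):
    """Anthropometric features: Euclidean length of each bone,
    given a dict mapping joint name -> (x, y, z) position in metres."""
    return [math.dist(joints[a], joints[b]) for a, b in BONES]

def siamese_distance(emb_a, emb_b):
    """L2 distance between two gait embeddings; in a Siamese setup,
    a small distance suggests the two sequences belong to the same person."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(emb_a, emb_b)))

# Toy single-frame skeleton (made-up coordinates).
joints = {
    "hip": (0.0, 0.9, 0.0), "knee": (0.0, 0.5, 0.05),
    "ankle": (0.0, 0.1, 0.0), "shoulder": (0.0, 1.4, 0.0),
    "elbow": (0.0, 1.1, 0.1), "wrist": (0.0, 0.85, 0.15),
}
feats = bone_lengths(joints)
```

In the paper's setting these anthropometric features complement the per-frame joint graph consumed by the ST-GCN branch; bone lengths are attractive under occlusion because they are constant for a given subject and can be estimated from whichever frames are visible.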



Acknowledgements

This work was supported by the National Natural Science Foundation of China [grant number 61871326].

Author information


Corresponding author

Correspondence to Xinbo Zhao.


Electronic supplementary material

Supplementary video (MP4 18.0 MB), available online.


Cite this article

Li, N., Zhao, X. A multi-modal dataset for gait recognition under occlusion. Appl Intell 53, 1517–1534 (2023). https://doi.org/10.1007/s10489-022-03474-8
