
Gait Recognition from a Single Image Using a Phase-Aware Gait Cycle Reconstruction Network

  • Conference paper
Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (volume 12364)


Abstract

We propose, for the first time, a method of gait recognition from a single image, which enables latency-free gait recognition. To mitigate the large intra-subject variation caused by the phase (gait pose) difference between a matching pair of input single images, we first reconstruct full gait cycles of image sequences from the single images using an auto-encoder framework, and then feed them into a state-of-the-art gait recognition network for matching. Specifically, a phase estimation network is introduced for the input single image, and the gait cycle reconstruction network exploits the estimated phase to reduce the dependence of the encoded feature on the phase of that single image. We call this the phase-aware gait cycle reconstructor (PA-GCR). During training, the PA-GCR and the recognition network are optimized simultaneously to achieve a good trade-off between reconstruction and recognition accuracy. Experiments on three gait datasets demonstrate that the proposed method achieves significant performance improvements.
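The pipeline in the abstract can be sketched schematically. The following minimal NumPy mock is not the authors' implementation: it substitutes hypothetical random linear maps for the actual CNNs and assumes silhouette and feature dimensions, but it illustrates the data flow described above: estimating the phase as a point on the unit circle, phase-aware encoding into a phase-independent feature, decoding one frame per target phase of a 25-phase cycle, and a joint reconstruction-plus-recognition loss.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PHASES = 25          # phases per reconstructed gait cycle (as in Note 3)
IMG_DIM = 64 * 44      # flattened silhouette size (assumed)
FEAT_DIM = 128         # latent feature size (assumed)

# Hypothetical linear stand-ins for the three networks; the real model uses CNNs.
W_phase = rng.normal(0, 0.01, (IMG_DIM, 2))             # phase estimator -> (cos, sin)
W_enc = rng.normal(0, 0.01, (IMG_DIM + 2, FEAT_DIM))    # phase-aware encoder
W_dec = rng.normal(0, 0.01, (FEAT_DIM + 2, IMG_DIM))    # phase-conditioned decoder

def estimate_phase(x):
    """Regress the gait phase of a single image as a point on the unit circle."""
    v = x @ W_phase
    return v / (np.linalg.norm(v) + 1e-8)

def reconstruct_cycle(x):
    """PA-GCR sketch: encode conditioned on the estimated phase into a
    phase-independent feature, then decode one frame per target phase."""
    p = estimate_phase(x)
    z = np.tanh(np.concatenate([x, p]) @ W_enc)
    frames = []
    for k in range(N_PHASES):
        theta = 2 * np.pi * k / N_PHASES
        cond = np.array([np.cos(theta), np.sin(theta)])
        frames.append(np.concatenate([z, cond]) @ W_dec)
    return np.stack(frames)            # (N_PHASES, IMG_DIM)

def joint_loss(cycle, target_cycle, feat, pos_feat, neg_feat, lam=1.0, margin=0.2):
    """Reconstruction MSE plus a triplet-style recognition term, jointly
    weighted to trade off reconstruction against recognition accuracy."""
    l_rec = np.mean((cycle - target_cycle) ** 2)
    l_id = max(0.0, margin
               + np.linalg.norm(feat - pos_feat)
               - np.linalg.norm(feat - neg_feat))
    return l_rec + lam * l_id

x = rng.random(IMG_DIM)                 # one flattened input silhouette
cycle = reconstruct_cycle(x)
target = rng.random((N_PHASES, IMG_DIM))
f, fp, fn = rng.random((3, FEAT_DIM))   # anchor / positive / negative features
loss = joint_loss(cycle, target, f, fp, fn)
print(cycle.shape)                      # (25, 2816)
```

The `lam` weight in `joint_loss` is where the reconstruction/recognition trade-off mentioned in the abstract would be controlled; in the actual method the linear maps are convolutional networks trained end to end.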


Notes

  1. The results were obtained by applying their model to our test set (a single image selected from each sequence).

  2. Reconstruction results for cross-dataset testing are shown in the supplementary material.

  3. A mean squared L2 distance of 0.02 corresponds to approximately 9° on the circumference, i.e., less than one phase step of a gait cycle containing 25 phases.
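Under the common assumption that the phase is embedded as a point on the unit circle, so that a small L2 chord distance approximates the arc angle, Note 3's conversion can be sanity-checked as below; the exact degree figure depends on the embedding convention, which is an assumption here.

```python
import math

msd = 0.02                          # mean squared L2 distance from Note 3
chord = math.sqrt(msd)              # chord length on the unit circle
angle_deg = math.degrees(chord)     # small-angle approximation: arc ≈ chord
one_phase_deg = 360.0 / 25          # spacing between adjacent phases

print(round(angle_deg, 1), one_phase_deg)   # ≈ 8.1 and 14.4
```

Whichever convention is used, the resulting angle stays well below the 14.4° spacing between adjacent phases, which is the point of the note.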


Acknowledgment

This work was supported by JSPS KAKENHI Grant Nos. JP18H04115, JP19H05692, and JP20H00607, the Jiangsu Provincial Science and Technology Support Program (No. BE2014714), the 111 Project (No. B13022), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Author information


Corresponding author

Correspondence to Chi Xu.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, C., Makihara, Y., Li, X., Yagi, Y., Lu, J. (2020). Gait Recognition from a Single Image Using a Phase-Aware Gait Cycle Reconstruction Network. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12364. Springer, Cham. https://doi.org/10.1007/978-3-030-58529-7_23


  • DOI: https://doi.org/10.1007/978-3-030-58529-7_23


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58528-0

  • Online ISBN: 978-3-030-58529-7

  • eBook Packages: Computer Science, Computer Science (R0)
