
Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition

  • Original article
  • Published in The Visual Computer

Abstract

Gait recognition systems have shown great potential in the field of biometric recognition. Unfortunately, the accuracy of gait recognition is easily degraded by large changes in view angle. To address this problem, this study proposes a feature distribution consistent generative adversarial network (FDC-GAN) that transforms gait images from arbitrary views to a target view and then performs identity recognition. In addition to a reconstruction loss, a view classification loss and an identity-preserving loss are introduced to guide the generator to produce gait images of the target view while preserving identity information. To further encourage the network to generate gait images whose feature distribution aligns well with the true distribution, we also exploit the recently proposed recurrent cycle consistency loss, which helps remove imperceptible and useless content retained in the generated gait images. Experimental results on the CASIA-B and OU-MVLP datasets demonstrate the state-of-the-art performance of our model compared with other GAN-based cross-view gait recognition models.
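To make the composition of the objective concrete, the sketch below shows one way the four loss terms named in the abstract (reconstruction, adversarial realism with view classification, identity preservation, and recurrent cycle consistency) could be combined into a single generator loss. It is a minimal PyTorch illustration with toy stand-in networks, placeholder loss weights, and simplified loss forms; it is not the authors' FDC-GAN implementation, whose architecture and exact formulations are given in the paper.

```python
# Minimal PyTorch sketch of a combined generator objective in the spirit of
# FDC-GAN. The toy networks, loss weights, and exact loss forms below are
# illustrative assumptions for exposition, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Toy view-transforming generator: gait image + target-view code -> image."""

    def __init__(self, num_views: int, img_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + num_views, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, view_onehot):
        b, _, h, w = x.shape
        # Broadcast the one-hot view code over the spatial dimensions.
        code = view_onehot.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, code], dim=1))


class Discriminator(nn.Module):
    """Toy discriminator with a real/fake head and a view-classification head."""

    def __init__(self, num_views: int, img_ch: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(img_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(32, 1)
        self.view_head = nn.Linear(32, num_views)

    def forward(self, x):
        f = self.features(x)
        return self.adv_head(f), self.view_head(f)


def generator_loss(G, D, id_encoder, x_src, view_tgt_onehot, x_tgt,
                   weights=(10.0, 1.0, 1.0, 1.0)):
    """Reconstruction + adversarial/view-classification + identity-preserving
    + recurrent cycle consistency terms (weights are placeholders)."""
    w_rec, w_adv, w_id, w_rcc = weights
    fake = G(x_src, view_tgt_onehot)

    # 1) Reconstruction against the ground-truth target-view gait image.
    loss_rec = F.l1_loss(fake, x_tgt)

    # 2) Adversarial realism plus classification of the generated view.
    adv_score, view_logits = D(fake)
    loss_adv = F.binary_cross_entropy_with_logits(adv_score, torch.ones_like(adv_score))
    loss_view = F.cross_entropy(view_logits, view_tgt_onehot.argmax(dim=1))

    # 3) Identity preservation: identity features of the generated image should
    #    stay close to those of the real target-view image of the same subject.
    loss_id = F.l1_loss(id_encoder(fake), id_encoder(x_tgt))

    # 4) Recurrent cycle consistency: re-feeding the output with the same target
    #    view should behave like an identity mapping.
    loss_rcc = F.l1_loss(G(fake, view_tgt_onehot), fake)

    return (w_rec * loss_rec + w_adv * (loss_adv + loss_view)
            + w_id * loss_id + w_rcc * loss_rcc)


if __name__ == "__main__":
    num_views = 11  # CASIA-B, for example, provides 11 view angles
    G, D = Generator(num_views), Discriminator(num_views)
    id_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten())
    x_src = torch.rand(4, 1, 64, 64) * 2 - 1   # source-view gait images (dummy data)
    x_tgt = torch.rand(4, 1, 64, 64) * 2 - 1   # same subjects at the target view
    view = F.one_hot(torch.randint(0, num_views, (4,)), num_views).float()
    loss = generator_loss(G, D, id_enc, x_src, view, x_tgt)
    loss.backward()
    print("combined generator loss:", float(loss))
```

The re-feeding term in step 4 follows the spirit of the recurrent cycle consistency loss cited in the abstract: transforming an image that is already at the target view to that same view should act as an identity mapping, which discourages the generator from hiding view-specific or identity-irrelevant content in its outputs.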

References

  1. Sharma, R.P., Dey, S.: Fingerprint liveness detection using local quality features. Vis. Comput. 35, 1393–1410 (2019). https://doi.org/10.1007/s00371-018-01618-x

  2. Gad, R., Talha, M., El-Latif, A.A.A., Zorkany, M., El-Sayed, A., El-Fishawy, N., Muhammad, G.: Iris recognition using multi-algorithmic approaches for cognitive internet of things (CIoT) framework. Future Gener. Comput. Syst. 89, 178–191 (2018). https://doi.org/10.1016/j.future.2018.06.020

  3. Wang, N., Li, Q., El-Latif, A.A.A., Zhang, T., Niu, X.: Toward accurate localization and high recognition performance for noisy iris images. Multimed. Tools Appl. 71(3), 1411–1430 (2014). https://doi.org/10.1007/s11042-012-1278-7

  4. Wang, N., Li, Q., El-Latif, A.A.A., Peng, J., Niu, X.: An enhanced thermal face recognition method based on multiscale complex fusion for Gabor coefficients. Multimed. Tools Appl. 72(3), 2339–2358 (2014). https://doi.org/10.1007/s11042-013-1551-4

  5. Ariyanto, G., Nixon, M. S.: Model-based 3d gait biometrics. In: International Joint Conference on Biometrics, pp. 1–7 (2011). https://doi.org/10.1109/IJCB.2011.6117582

  6. Zhang, Z., Seah, H. S., Quah, C. K., Ong, A., Jabbar, K.: A multiple camera system with real-time volume reconstruction for articulated skeleton pose tracking. In: International Conference on Multimedia Modeling, pp. 182–192 (2011). https://doi.org/10.1007/978-3-642-17832-0_18

  7. Goffredo, M., Bouchrika, I., Carter, J.N., Nixon, M.S.: Self-calibrating view-invariant gait biometrics. IEEE Trans. Syst. Man Cybern. B Cybern. 40(4), 997–1008 (2009). https://doi.org/10.1109/TSMCB.2009.2031091

  8. Lu, J., Tan, Y.-P.: Uncorrelated discriminant simplex analysis for view-invariant gait signal computing. Pattern Recognit. Lett. 31(5), 382–393 (2010). https://doi.org/10.1016/j.patrec.2009.11.006

  9. Kusakunniran, W., Wu, Q., Li, H., Zhang, J.: Multiple views gait recognition using view transformation model based on optimized gait energy image. In: IEEE International Conference on Computer Vision, pp. 1058–1064 (2009) https://doi.org/10.1109/ICCVW.2009.5457587

  10. Kusakunniran, W., Wu, Q., Zhang, J., Li, H.: Support vector regression for multi-view gait recognition based on local motion feature selection. In: Computer Vision and Pattern Recognition, pp. 974–981 (2010) https://doi.org/10.1109/CVPR.2010.5540113

  11. Makihara, Y., Sagawa, R., Mukaigawa, Y., Echigo, T., Yagi, Y.: Gait recognition using a view transformation model in the frequency domain. In: European Conference on Computer Vision, pp. 151–163 (2006) https://doi.org/10.1007/11744078_12

  12. Xing, X., Wang, K., Yan, T., Lv, Z.: Complete canonical correlation analysis with application to multi-view gait recognition. Pattern Recognit. 50, 107–117 (2016). https://doi.org/10.1016/j.patcog.2015.08.011

  13. Xu, C., Makihara, Y., Li, X., Yagi, Y., Lu, J.: Cross-view gait recognition using pairwise spatial transformer networks. IEEE Trans. Circuits Syst. Video Technol. 31(1), 260–274 (2021). https://doi.org/10.1109/TCSVT.2020.2975671

  14. Chao, H., He, Y., Zhang, J., Feng, J.: GaitSet: regarding gait as a set for cross-view gait recognition. In: AAAI Conference on Artificial Intelligence, pp. 8126–8133 (2019). https://doi.org/10.1609/aaai.v33i01.33018126

  15. Song, C., Huang, Y., Huang, Y., Jia, N., Wang, L.: GaitNet: an end-to-end network for gait-based human identification. Pattern Recognit. 96, 106988 (2019). https://doi.org/10.1016/j.patcog.2019.106988

  16. Zhang, P., Wu, Q., Xu, J.: VT-GAN: View transformation GAN for gait recognition across views. In: International Joint Conference on Neural Networks, pp. 14–19 (2019) https://doi.org/10.1109/IJCNN.2019.8852258

  17. Zhang, P., Wu, Q., Xu, J.: VN-GAN: Identity-preserved variation normalizing GAN for gait recognition. In: International Joint Conference on Neural Networks, pp. 1–8 (2019) https://doi.org/10.1109/IJCNN.2019.8852401

  18. Jiang, W., Liu, S., Gao, C., Cao, J., He, R., Feng, J., Yan, S.: PSGAN: pose and expression robust spatial-aware GAN for customizable makeup transfer. In: Computer Vision and Pattern Recognition (2020). https://doi.org/10.1109/CVPR42600.2020.00524

  19. Fang, Z., Liu, Z., Liu, T., Hung, C.-C., Xiao, J., Feng, G.: Facial expression GAN for voice-driven face generation. Vis. Comput. (2021). https://doi.org/10.1007/s00371-021-02074-w

  20. Zhan, F., Zhang, C.: Spatial-aware GAN for unsupervised person re-identification. In: International Conference on Pattern Recognition (2021). https://doi.org/10.1109/ICPR48806.2021.9412465

  21. Liao, R., An, W., Yu, S., Li, Z., Huang, Y.: Dense-view GEIs set: view space covering for gait recognition based on dense-view GAN. In: International Joint Conference on Biometrics (2020). https://doi.org/10.1109/IJCB48548.2020.9304910

  22. Wang, Y., Song, C., Huang, Y., Wang, Z., Wang, L.: Learning view invariant gait features with two-stream GAN. Neurocomputing 339, 245–254 (2019). https://doi.org/10.1016/j.neucom.2019.02.025

  23. Du, H., Tian, X., Xie, L., Li, H.: Optimizing voice conversion network with cycle consistency loss of speaker identity. In: IEEE Spoken Language Technology Workshop (2021). https://doi.org/10.1109/SLT48900.2021.9383567

  24. Sanchez, E., Valstar, M.: A recurrent cycle consistency loss for progressive face-to-face synthesis. In: Computer Vision and Pattern Recognition, (2020) https://arxiv.org/abs/2004.07165

  25. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: A unified embedding for face recognition and clustering. In: Computer Vision and Pattern Recognition, pp. 815–823 (2015) https://arxiv.org/abs/1503.03832

  26. Hu, M., Wang, Y., Zhang, Z., Little, J.J., Huang, D.: View-invariant discriminative projection for multi-view gait-based human identification. IEEE Trans. Inf. Forensics Secur. 8(12), 2034–2045 (2013). https://doi.org/10.1109/TIFS.2013.2287605

  27. Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. In: Neural Information Processing Systems, pp. 2672–2680 (2014) https://arxiv.org/abs/1406.2661

  28. Bi, F., Han, J., Tian, Y., Wang, Y.: SSGAN: generative adversarial networks for the stroke segmentation of calligraphic characters. Vis. Comput. (2021). https://doi.org/10.1007/s00371-021-02133-2

  29. Song, H., Wang, M., Zhang, L., Li, Y., Jiang, Z., Yin, G.: S2RGAN: sonar-image super-resolution based on generative adversarial network. Vis. Comput. (2020). https://doi.org/10.1007/s00371-020-01986-3

  30. Yu, S., Chen, H., Reyes, E. B. G., Poh, N.: GaitGAN: Invariant gait feature extraction using generative adversarial networks. In: Computer Vision and Pattern Recognition Workshops, pp. 532–539 (2017) https://doi.org/10.1109/CVPRW.2017.80

  31. Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning fine-grained image similarity with deep ranking. In: Computer Vision and Pattern Recognition, pp. 1386–1393 (2014) https://doi.org/10.1109/CVPR.2014.180

  32. Deng, W., Zheng, L., Ye, Q., Yang, Y., Jiao, J.: Similarity-preserving image-image domain adaptation for person re-identification. In: Computer Vision and Pattern Recognition, pp. 994–1003 (2018) https://arxiv.org/abs/1811.10551v2

  33. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A. A.: Image-to-image translation with conditional adversarial networks. In: Computer Vision and Pattern Recognition, pp. 1125–1134 (2017) https://arxiv.org/abs/1611.07004v1

  34. Sanchez, E., Valstar, M.: Triple consistency loss for pairing distributions in GAN-based face synthesis. In: Computer Vision and Pattern Recognition, (2018) https://arxiv.org/abs/1811.03492

  35. Yu, S., Tan, D., Tan, T.: A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. In: International Conference on Pattern Recognition, pp. 441–444 (2006) https://doi.org/10.1109/ICPR.2006.67

  36. Iwama, H., Okumura, M., Makihara, Y., Yagi, Y.: The OU-ISIR gait database comprising the large population dataset and performance evaluation of gait recognition. IEEE Trans. Inf. Forensics Secur. 7(5), 1511–1521 (2012). https://doi.org/10.1109/TIFS.2012

  37. Takemura, N., Makihara, Y., Muramatsu, D., Echigo, T., Yagi, Y.: Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Trans. Comput. Vis. Appl. 10(4), 1–14 (2018). https://doi.org/10.1186/s41074-018-0039-6

  38. Han, J., Bhanu, B.: Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell. 28(2), 316–322 (2006). https://doi.org/10.1109/TPAMI.2006.38

  39. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations, (2014) https://arxiv.org/abs/1412.6980v9

  40. He, Y., Zhang, J., Shan, H., Wang, L.: Multi-task GANs for view-specific feature learning in gait recognition. IEEE Trans. Inf. Forensics Secur. 14(1), 102–113 (2019). https://doi.org/10.1109/TIFS.2018.2844819

  41. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: Neural Information Processing Systems, pp. 6626–6637 (2017) https://arxiv.org/abs/1706.08500

  42. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition, pp. 1–9 (2015) https://arxiv.org/abs/1409.4842

  43. Zhang, Y., Fu, K., Han, C., Cheng, P.: Identity-and-pose-guided generative adversarial network for face rotation. Neurocomputing 450(3), 33–47 (2021). https://doi.org/10.1016/j.neucom.2021.04.007

Acknowledgements

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This work was supported by the National Natural Science Foundation of China (61872004) and in part by the National Key R&D Program of China (Grant No. 2020YFC2005300).

Author information

Corresponding authors

Correspondence to Yi Xia or Yongliang Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Wang, Y., Xia, Y. & Zhang, Y. Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition. Vis Comput 38, 1915–1928 (2022). https://doi.org/10.1007/s00371-021-02254-8
