
Multi-nonlinear multi-view locality-preserving projection with similarity learning for random cross-view gait recognition


Abstract

View variation is one of the greatest challenges in gait recognition. Subspace learning approaches address this issue by projecting cross-view features into a common subspace before recognition. However, their similarity measures are data-dependent, which results in low accuracy when cross-view gait samples are randomly arranged. Inspired by recent developments in data-driven similarity learning and multi-nonlinear projection, we propose a new unsupervised projection approach, called multi-nonlinear multi-view locality-preserving projection with similarity learning (M2LPP-SL). In M2LPP-SL, the similarity information among cross-view samples is learned adaptively, and the complex nonlinear structure of the original data is preserved through multiple explicit nonlinear projection functions. Nevertheless, its performance is largely affected by the choice of these functions. Considering the excellent ability of the kernel trick to capture nonlinear structure, we further extend M2LPP-SL into kernel space and propose its multiple-kernel version, MKMLPP-SL. As a result, our approaches capture linear and nonlinear structure more precisely and also learn the similarity information hidden in multi-view gait datasets. The proposed models can be solved efficiently by an alternating direction optimization method. Extensive experiments over various view combinations on the multi-view gait database CASIA-B demonstrate the superiority of the proposed algorithms.
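The recognition paradigm summarised above, projecting cross-view features into a common subspace and then matching, can be illustrated with a minimal NumPy sketch. This is a generic, hypothetical illustration only: the projection matrices `P_X` and `P_Y` are assumed to have been learned beforehand (for example, by an M2LPP-SL-style model), and all names and shapes are placeholders rather than the authors' implementation.

```python
import numpy as np

def cross_view_match(P_X, P_Y, gallery_X, probe_Y):
    """Nearest-neighbour gait matching in a learned common subspace.

    P_X, P_Y   : (d_x, k) and (d_y, k) projection matrices for the two views,
                 assumed to have been learned in advance (hypothetical inputs).
    gallery_X  : (n, d_x) gallery gait features captured under view X.
    probe_Y    : (m, d_y) probe gait features captured under view Y.
    Returns the index of the best-matching gallery sample for each probe.
    """
    G = gallery_X @ P_X            # project gallery features into the common subspace
    Q = probe_Y @ P_Y              # project probe features into the same subspace
    # L2-normalise so that Euclidean nearest neighbour behaves like cosine matching
    G = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-12)
    Q = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-12)
    dists = np.linalg.norm(Q[:, None, :] - G[None, :, :], axis=2)
    return dists.argmin(axis=1)    # predicted gallery index per probe
```

In the paper, these projections are obtained by the proposed M2LPP-SL and MKMLPP-SL models rather than fixed in advance; recognition accuracy then depends on how well the learned projections align the two views.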



Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant Nos. 71273053 and 11571074) and the Natural Science Foundation of Fujian Province (Grant No. 2018J01666).

Author information

Corresponding author

Correspondence to Xiaoyun Chen.

Additional information

Communicated by B. Prabhakaran.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Derivation showing how the objective function of the proposed M2LPP-SL is transformed into problem (12) when \(S^{XY}\) has been updated and is held fixed:

$$\begin{aligned}
J(P_{X},P_{Y}) &= \frac{1}{2}\sum_{i,p} \left\| P_{X}^{\mathrm{T}}\varPhi(x_{i}) - P_{X}^{\mathrm{T}}\varPhi(x_{p}) \right\|^{2} S_{ip}^{X} + \frac{1}{2}\sum_{j,q} \left\| P_{Y}^{\mathrm{T}}\varPhi(y_{j}) - P_{Y}^{\mathrm{T}}\varPhi(y_{q}) \right\|^{2} S_{jq}^{Y} \\
&\quad + \sum_{i,j} \left\| P_{X}^{\mathrm{T}}\varPhi(x_{i}) - P_{Y}^{\mathrm{T}}\varPhi(y_{j}) \right\|^{2} S_{ij}^{XY} \\
&= \tfrac{1}{2} \times 2\operatorname{Tr}\!\left[ P_{X}^{\mathrm{T}}\varPhi(X)\left(D^{X} - S^{X}\right)\varPhi(X)^{\mathrm{T}} P_{X} \right] + \tfrac{1}{2} \times 2\operatorname{Tr}\!\left[ P_{Y}^{\mathrm{T}}\varPhi(Y)\left(D^{Y} - S^{Y}\right)\varPhi(Y)^{\mathrm{T}} P_{Y} \right] \\
&\quad + \operatorname{Tr}\!\left[ P_{X}^{\mathrm{T}}\varPhi(X)D^{(XY)_{r}}\varPhi(X)^{\mathrm{T}} P_{X} + P_{Y}^{\mathrm{T}}\varPhi(Y)D^{(XY)_{c}}\varPhi(Y)^{\mathrm{T}} P_{Y} \right. \\
&\qquad \left. -\, P_{X}^{\mathrm{T}}\varPhi(X)S^{XY}\varPhi(Y)^{\mathrm{T}} P_{Y} - P_{Y}^{\mathrm{T}}\varPhi(Y)(S^{XY})^{\mathrm{T}}\varPhi(X)^{\mathrm{T}} P_{X} \right] \\
&= \operatorname{Tr}\!\left[ P_{X}^{\mathrm{T}}\varPhi(X)\left(D^{(XY)_{r}} + D^{X} - S^{X}\right)\varPhi(X)^{\mathrm{T}} P_{X} + P_{Y}^{\mathrm{T}}\varPhi(Y)\left(D^{(XY)_{c}} + D^{Y} - S^{Y}\right)\varPhi(Y)^{\mathrm{T}} P_{Y} \right. \\
&\qquad \left. -\, P_{X}^{\mathrm{T}}\varPhi(X)S^{XY}\varPhi(Y)^{\mathrm{T}} P_{Y} - P_{Y}^{\mathrm{T}}\varPhi(Y)(S^{XY})^{\mathrm{T}}\varPhi(X)^{\mathrm{T}} P_{X} \right] \\
&= \operatorname{Tr}\!\left\{ \begin{bmatrix} P_{X}^{\mathrm{T}}\varPhi(X) & P_{Y}^{\mathrm{T}}\varPhi(Y) \end{bmatrix} \begin{bmatrix} D^{(XY)_{r}} + D^{X} & 0 \\ 0 & D^{(XY)_{c}} + D^{Y} \end{bmatrix} \begin{bmatrix} \varPhi(X)^{\mathrm{T}} P_{X} \\ \varPhi(Y)^{\mathrm{T}} P_{Y} \end{bmatrix} \right\} \\
&\quad - \operatorname{Tr}\!\left\{ \begin{bmatrix} P_{X}^{\mathrm{T}}\varPhi(X) & P_{Y}^{\mathrm{T}}\varPhi(Y) \end{bmatrix} \begin{bmatrix} S^{X} & S^{XY} \\ (S^{XY})^{\mathrm{T}} & S^{Y} \end{bmatrix} \begin{bmatrix} \varPhi(X)^{\mathrm{T}} P_{X} \\ \varPhi(Y)^{\mathrm{T}} P_{Y} \end{bmatrix} \right\},
\end{aligned}$$
(20)

where \(S^{X} = [s_{ip}^{X}]_{n \times n}\) is the similarity matrix within the X-view, with degree matrix \(D^{X} = \operatorname{diag}(d_{11}^{X}, d_{22}^{X}, \ldots, d_{nn}^{X})\) and \(d_{ii}^{X} = \sum\nolimits_{j} s_{ij}^{X}\); \(S^{Y} = [s_{jq}^{Y}]_{m \times m}\) is the similarity matrix within the Y-view, with degree matrix \(D^{Y} = \operatorname{diag}(d_{11}^{Y}, d_{22}^{Y}, \ldots, d_{mm}^{Y})\) and \(d_{ii}^{Y} = \sum\nolimits_{j} s_{ij}^{Y}\); and \(S^{XY} = [s_{ij}^{XY}]_{n \times m}\) is the cross-view similarity matrix. \(D^{(XY)_{r}}\) and \(D^{(XY)_{c}}\) are diagonal matrices whose entries are the row sums and the column sums of \(S^{XY}\), respectively.
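The second equality in Eq. (20) uses the standard locality-preserving identity \(\sum_{i,p} \|a_{i} - a_{p}\|^{2} S_{ip} = 2\operatorname{Tr}[A(D - S)A^{\mathrm{T}}]\) for a symmetric similarity matrix \(S\) with degree matrix \(D\), where \(a_{i}\) is the \(i\)-th column of \(A\) (here \(A = P_{X}^{\mathrm{T}}\varPhi(X)\)). The following small NumPy check, with random matrices standing in for the projected features, verifies this identity numerically; it is a sanity check of the algebra, not part of the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                        # number of samples, subspace dimension
A = rng.standard_normal((k, n))    # columns a_i stand in for P_X^T Phi(x_i)
S = rng.random((n, n))
S = (S + S.T) / 2                  # symmetric within-view similarity matrix
D = np.diag(S.sum(axis=1))         # degree matrix, d_ii = sum_j S_ij

# Left-hand side: sum_{i,p} ||a_i - a_p||^2 * S_ip
lhs = sum(S[i, p] * np.sum((A[:, i] - A[:, p]) ** 2)
          for i in range(n) for p in range(n))

# Right-hand side: 2 * Tr[A (D - S) A^T], the Laplacian form used in Eq. (20)
rhs = 2 * np.trace(A @ (D - S) @ A.T)

assert np.isclose(lhs, rhs)
```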

Let \(P = \begin{bmatrix} P_{X} \\ P_{Y} \end{bmatrix}\), \(\varPhi(Z) = \varPhi(X) \oplus \varPhi(Y) = \begin{bmatrix} \varPhi(X) & 0 \\ 0 & \varPhi(Y) \end{bmatrix}\), \(S = \begin{bmatrix} S^{X} & S^{XY} \\ (S^{XY})^{\mathrm{T}} & S^{Y} \end{bmatrix}\), \(D = \operatorname{diag}(d_{11}, d_{22}, \ldots, d_{(n+m)(n+m)})\) with \(d_{ii} = \sum\nolimits_{j} S_{ij}\), and \(L = D - S\). Then Eq. (20) can be transformed into:

$$\begin{aligned}
J(P_{X},P_{Y}) &= \operatorname{Tr}\!\left\{ \begin{bmatrix} P_{X} \\ P_{Y} \end{bmatrix}^{\mathrm{T}} \begin{bmatrix} \varPhi(X) & 0 \\ 0 & \varPhi(Y) \end{bmatrix} \begin{bmatrix} D^{X} + D^{(XY)_{r}} - S^{X} & -S^{XY} \\ -(S^{XY})^{\mathrm{T}} & D^{Y} + D^{(XY)_{c}} - S^{Y} \end{bmatrix} \begin{bmatrix} \varPhi(X) & 0 \\ 0 & \varPhi(Y) \end{bmatrix}^{\mathrm{T}} \begin{bmatrix} P_{X} \\ P_{Y} \end{bmatrix} \right\} \\
&= \operatorname{Tr}\!\left( P^{\mathrm{T}}\varPhi(Z)(D - S)\varPhi(Z)^{\mathrm{T}} P \right) \\
&= \operatorname{Tr}\!\left( P^{\mathrm{T}}\varPhi(Z)L\varPhi(Z)^{\mathrm{T}} P \right).
\end{aligned}$$
(21)
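As a numerical sanity check of the block construction behind Eqs. (20) and (21), the sketch below assembles \(S\), \(L\) and the block-diagonal \(\varPhi(Z)\) from random stand-ins for \(\varPhi(X)\), \(\varPhi(Y)\) and the similarity matrices, and confirms that the joint trace form \(\operatorname{Tr}(P^{\mathrm{T}}\varPhi(Z)L\varPhi(Z)^{\mathrm{T}}P)\) equals the pairwise objective of Eq. (20). All names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
dx, dy, k, n, m = 5, 4, 3, 6, 7
PX, PY = rng.standard_normal((dx, k)), rng.standard_normal((dy, k))
X, Y = rng.standard_normal((dx, n)), rng.standard_normal((dy, m))  # stand-ins for Phi(X), Phi(Y)
SX = rng.random((n, n)); SX = (SX + SX.T) / 2    # symmetric within-view similarities
SY = rng.random((m, m)); SY = (SY + SY.T) / 2
SXY = rng.random((n, m))                         # cross-view similarities

# Pairwise form of Eq. (20)
AX, AY = PX.T @ X, PY.T @ Y
J = (0.5 * sum(SX[i, p] * np.sum((AX[:, i] - AX[:, p]) ** 2)
               for i in range(n) for p in range(n))
     + 0.5 * sum(SY[j, q] * np.sum((AY[:, j] - AY[:, q]) ** 2)
                 for j in range(m) for q in range(m))
     + sum(SXY[i, j] * np.sum((AX[:, i] - AY[:, j]) ** 2)
           for i in range(n) for j in range(m)))

# Block form of Eq. (21): J = Tr(P^T Phi(Z) L Phi(Z)^T P)
P = np.vstack([PX, PY])                              # stacked projection matrices
PhiZ = np.block([[X, np.zeros((dx, m))],
                 [np.zeros((dy, n)), Y]])            # Phi(Z) = Phi(X) (+) Phi(Y)
S = np.block([[SX, SXY], [SXY.T, SY]])               # joint similarity matrix
L = np.diag(S.sum(axis=1)) - S                       # joint graph Laplacian L = D - S
assert np.isclose(J, np.trace(P.T @ PhiZ @ L @ PhiZ.T @ P))
```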


Cite this article

Chen, X., Kang, Y. & Chen, Z. Multi-nonlinear multi-view locality-preserving projection with similarity learning for random cross-view gait recognition. Multimedia Systems 26, 727–744 (2020). https://doi.org/10.1007/s00530-020-00685-2

