Abstract:
View Transformation Model (VTM) is a widely used approach to the multi-view problem in gait recognition, but accuracy loss always occurs during the view transformation procedure, especially as the difference in viewing angle between two gait features grows. To address this difficulty, 2D Enhanced GEI (2D-EGEI) is proposed to extract effective gait features by using 2DPCA reconstruction. In addition, Nonnegative Matrix Factorization (NMF) is adopted to learn locally structured features that compensate for the accuracy loss. Moreover, 2D Linear Discriminant Analysis (2DLDA) is introduced to project the features into a discriminant space and improve classification ability. Compared with two deep learning methods, experimental results show that the proposed method significantly outperforms the Stacked Progressive Auto-Encoder (SPAE) method and comes close to the deep CNN method.
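As a rough illustration of the NMF step, the sketch below factorizes a nonnegative data matrix with the standard Lee-Seung multiplicative updates. The data are hypothetical stand-ins for flattened Gait Energy Images (GEIs); the image size (64x44), number of components, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Hypothetical data: 20 flattened GEIs of size 64x44 with
# nonnegative pixel intensities in [0, 1] (stand-in, not real gait data).
rng = np.random.default_rng(0)
V = rng.random((20, 64 * 44))           # samples x pixels

k = 10                                   # assumed number of basis images
W = rng.random((V.shape[0], k)) + 1e-4   # per-sample encodings (gait features)
H = rng.random((k, V.shape[1])) + 1e-4   # parts-based, local basis images

eps = 1e-10
for _ in range(200):
    # Multiplicative updates keep W and H nonnegative while
    # reducing the Frobenius reconstruction error ||V - WH||.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because both factors stay nonnegative, the rows of H tend toward additive, parts-based components, which is why NMF is suited to recovering local structure lost in view transformation.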
Published in: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA)
Date of Conference: 11-12 January 2018
Date Added to IEEE Xplore: 12 March 2018