Abstract
Non-interactive biometric systems have attracted enormous interest from computer vision researchers because they provide more efficient and reliable means of identification and authorization from a distance. Gait and face recognition are non-interactive biometric modalities that require no cooperation between the user and the surveillance system. In contrast to face recognition, gait recognition can handle low-resolution and low-brightness images. It aims to identify individuals based on the manner and style of their walking. Gait recognition has numerous applications in several domains, such as healthcare monitoring, security systems, and surveillance of indoor and outdoor activities. Yet, gait recognition performance is frequently degraded by a variety of factors, such as viewing-angle variations and clothing changes. Recently, deep learning models have been employed efficiently in gait recognition systems; they are more generic, since the feature construction process is completely automated. This paper presents gait features that are measured automatically during walking and used by the recognition system. To extract these features from a video of a moving subject, two main modules are used: motion detection and tracking, and feature extraction. The first module detects the walking subject in an image sequence or video: a background subtraction technique separates the moving foreground from the background, and the moving region corresponding to the spatial silhouette is tracked and segmented. The second module, feature extraction, extracts features from the sequence of silhouette images. The gait cycle is estimated from the shape changes of the silhouettes and is used to construct a short sequence of Gait Energy Images (GEIs). The optical flow of the GEIs is computed to retain only the moving parts and exclude the static ones. Finally, a Convolutional Neural Network (CNN) is fed with the optical flow output to build unique features. These features are used for neural network training, and evaluation is performed on popular gait benchmark datasets. The obtained results reveal an accuracy of 95% with increased robustness to view and probe changes.
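The sketch below is a minimal illustration of the pipeline described in the abstract, not the authors' implementation: OpenCV's MOG2 background subtractor stands in for the unspecified background subtraction step, GEIs are formed by averaging silhouettes over an assumed gait cycle, and dense Farnebäck optical flow is computed between consecutive GEIs. The function names, parameter values, and the `frames` input (a list of grayscale video frames of one walking subject) are illustrative assumptions; gait-cycle estimation, silhouette alignment, and the CNN itself are omitted.

```python
# Minimal sketch (not the paper's code): silhouette extraction, Gait Energy
# Image (GEI) construction, and Farneback optical flow between consecutive GEIs.
import cv2
import numpy as np

def extract_silhouettes(frames):
    """Background subtraction (MOG2 assumed) followed by simple cleanup."""
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    silhouettes = []
    for frame in frames:
        mask = subtractor.apply(frame)
        # Binarize and remove small noise with a morphological opening.
        _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        silhouettes.append(mask)
    return silhouettes

def gait_energy_image(cycle_silhouettes, size=(64, 64)):
    """Average the resized silhouettes of one (assumed) gait cycle."""
    resized = [cv2.resize(s, size).astype(np.float32) / 255.0
               for s in cycle_silhouettes]
    return np.mean(resized, axis=0)  # values in [0, 1]

def gei_optical_flow(gei_prev, gei_next):
    """Dense Farneback flow between two consecutive GEIs; the flow magnitude
    highlights moving body parts and suppresses static ones."""
    flow = cv2.calcOpticalFlowFarneback(
        (gei_prev * 255).astype(np.uint8),
        (gei_next * 255).astype(np.uint8),
        None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude  # feature map that would be passed to the CNN
```

In the pipeline described above, the resulting flow-magnitude maps would then be fed to a CNN to learn the final gait features used for training and evaluation.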

Cite this article
Hawas, A.R., El-Khobby, H.A., Abd-Elnaby, M. et al. Gait identification by convolutional neural networks and optical flow. Multimed Tools Appl 78, 25873–25888 (2019). https://doi.org/10.1007/s11042-019-7638-9