Abstract
In this paper, we use data from the Microsoft Kinect sensor, which processes the captured image of a person and reduces the data for each frame to a set of skeleton joints. We then propose constructing a single image from all the frames extracted from a movement, which facilitates training a convolutional neural network (CNN). Finally, we train a CNN on the MSRC-12 dataset using two training schemes: combined training and individual training. The trained network achieved an accuracy of 86.67% with combined training and 90.78% with individual training, which compares favorably with related work. This demonstrates that convolutional neural networks can be effective for recognizing human actions from joint data.
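The central preprocessing idea described above is to turn a variable-length sequence of skeleton frames into one fixed-size image that a CNN can consume. The following minimal Python/NumPy sketch illustrates one plausible way to do this; it assumes the 20-joint Kinect/MSRC-12 skeleton with (x, y, z) coordinates per joint, and the normalization, fixed image width, and function name are illustrative assumptions rather than the authors' implementation.

# Minimal sketch: stack per-frame joint coordinates into a single grayscale
# image for CNN input. Joint count, normalization, and fixed width are
# assumptions for illustration, not the authors' exact method.
import numpy as np

NUM_JOINTS = 20  # MSRC-12 skeletons have 20 joints, each with (x, y, z)

def movement_to_image(frames, width=64):
    """frames: array of shape (num_frames, NUM_JOINTS, 3).
    Returns a uint8 image of shape (NUM_JOINTS * 3, width)."""
    frames = np.asarray(frames, dtype=np.float32)
    # One column per frame, one row per joint coordinate.
    img = frames.reshape(frames.shape[0], NUM_JOINTS * 3).T
    # Scale coordinates to the 0-255 grayscale range.
    img = (img - img.min()) / (img.max() - img.min() + 1e-8) * 255.0
    # Resample the time axis to a fixed width so every movement yields
    # an image of the same size (simple linear interpolation).
    t_old = np.linspace(0.0, 1.0, img.shape[1])
    t_new = np.linspace(0.0, 1.0, width)
    img = np.stack([np.interp(t_new, t_old, row) for row in img])
    return img.astype(np.uint8)

# Example: a 90-frame movement becomes a 60 x 64 grayscale image.
dummy_movement = np.random.rand(90, NUM_JOINTS, 3)
print(movement_to_image(dummy_movement).shape)  # (60, 64)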
References
MSRC-12 dataset. https://www.microsoft.com/en-us/download/details.aspx?id=52283. Accessed 21 Aug 2018
Salvador, S., Chan, P.: FastDTW: toward accurate dynamic time warping in linear time and space. Intell. Data Anal. 11(5), 561–580 (2007)
Xia, L., Chen, C.C., Aggarwal, J.K.: View invariant human action recognition using histograms of 3D joints. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, pp. 20–27 (2012)
Piyathilaka, L., Kodagoda, S.: Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features. In: IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), Melbourne, VIC, pp. 567–572 (2013)
Althloothi, S., Mahoor, M.H., Zhang, X., Voyles, R.M.: Human activity recognition using multi-features and multiple kernel learning. Pattern Recogn. 47(5), 1800–1812 (2014)
Du, Y., Fu, Y., Wang, L.: Representation learning of temporal dynamics for skeleton-based action recognition. IEEE Trans. Image Process. 25(7), 3010–3022 (2016)
Ke, Q., An, S., Bennamoun, M., Sohel, F., Boussaid, F.: SkeletonNet: mining deep part features for 3-D action recognition. IEEE Signal Process. Lett. 24(6), 731–735 (2017)
Mo, L., Li, F., Zhu, Y., Huang, A.: Human physical activity recognition based on computer vision with deep learning model. In: IEEE International Instrumentation and Measurement Technology Conference Proceedings, Taipei, pp. 1–6 (2016)
Hou, Y., Li, Z., Wang, P., Li, W.: Skeleton optical spectra-based action recognition using convolutional neural networks. IEEE Trans. Circ. Syst. Video Technol. 28(3), 807–811 (2018)
Jiang, X., Zhong, F., Peng, Q., Qin, X.: Online robust action recognition based on a hierarchical model. Vis. Comput. 30, 1021 (2014). https://doi.org/10.1007/s00371-014-0923-8
Sharaf, A., Torki, M., Hussein, M.E., El-Saban, M.: Real-time multi-scale action detection from 3D skeleton data. In: IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, pp. 998–1005 (2015)
Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., Lew, M.S.: Deep learning for visual understanding: a review. Neurocomputing 187, 27–48 (2016). Recent Developments on Deep Big Vision
Zeiler, M.: Hierarchical convolutional deep learning in computer vision. Ph.D. thesis, New York University (2014)
Abadi, M.: TensorFlow: learning functions at scale. ACM SIGPLAN Not. 51, 1 (2016). https://doi.org/10.1145/3022670.2976746
Wu, F., Hu, P., Kong, D.: Flip-rotate-pooling convolution and split dropout on convolution neural networks for image classification. arXiv preprint arXiv:1507.08754 (2015)
Nguyen, D., Le, H.: Kinect gesture recognition: SVM vs. RVM. In: Seventh International Conference on Knowledge and Systems Engineering (KSE), Ho Chi Minh City, pp. 395–400 (2015)
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Pfitscher, M., Welfer, D., de Souza Leite Cuadros, M.A., Gamarra, D.F.T. (2020). Activity Gesture Recognition on Kinect Sensor Using Convolutional Neural Networks and FastDTW for the MSRC-12 Dataset. In: Abraham, A., Cherukuri, A.K., Melin, P., Gandhi, N. (eds) Intelligent Systems Design and Applications. ISDA 2018. Advances in Intelligent Systems and Computing, vol 940. Springer, Cham. https://doi.org/10.1007/978-3-030-16657-1_21
DOI: https://doi.org/10.1007/978-3-030-16657-1_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-16656-4
Online ISBN: 978-3-030-16657-1
eBook Packages: Intelligent Technologies and Robotics (R0)