Abstract
One of the imminent challenges for assistive robots learning human activities by observing a human perform a task is how to define movement representations (states), a question that has recently been explored in search of improved solutions. This paper proposes a method for extracting key frames (or poses) of human activities from skeleton joint coordinates obtained using an RGB-D camera (depth sensor). The motion (kinetic) energy of each pose in an activity sequence is computed, and a novel approach is proposed for locating the key poses that define an activity using moving-average crossovers of the computed pose kinetic energy. This is important because not all frames of an activity sequence are key to defining the activity. To evaluate the reliability of the extracted key poses, a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), which is capable of learning a sequence of state transitions in an activity, is applied to classify activities from the identified key poses. This matters for assistive robots, which must identify key human poses and state transitions in order to correctly carry out human activities. Preliminary experimental results are presented to illustrate the proposed methodology.
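The key-pose extraction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the finite-difference velocity estimate, unit joint masses, window sizes, and all function names are illustrative assumptions.

```python
import numpy as np

def pose_kinetic_energy(joints, dt=1.0 / 30.0, mass=1.0):
    """Per-frame kinetic-energy proxy from 3-D joint positions.

    joints: array of shape (T, J, 3) -- T frames, J skeleton joints.
    Velocity is approximated by finite differences between consecutive
    frames; energy is 0.5 * m * sum(|v|^2) over all joints, assuming
    unit joint masses. Returns an array of length T-1.
    """
    vel = np.diff(joints, axis=0) / dt               # (T-1, J, 3)
    return 0.5 * mass * (vel ** 2).sum(axis=(1, 2))  # (T-1,)

def moving_average(x, w):
    """Simple moving average with window w ('valid' mode)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def key_frame_indices(energy, short_w=3, long_w=9):
    """Candidate key-pose locations: frames where a short-window moving
    average of the energy signal crosses a long-window moving average."""
    short = moving_average(energy, short_w)
    long_ = moving_average(energy, long_w)
    # Trim the short series so both averages are centred on the same frames.
    offset = (long_w - short_w) // 2
    short = short[offset:offset + long_.size]
    diff = short - long_
    # A crossover is a sign change of the difference between the averages.
    cross = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    # Map positions in the aligned series back to original frame indices.
    return cross + (long_w - 1) // 2 + 1
```

With a synthetic sequence in which the skeleton is still, moves briefly, and stops again, the crossover indices cluster around the onset and end of the motion burst, which is the intuition behind using crossovers as key-pose markers.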
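For the classification stage, a minimal NumPy sketch of an LSTM forward pass (after Hochreiter and Schmidhuber, 1997) over a sequence of key-pose feature vectors might look like the following. The dimensions, random initialisation, and softmax readout are illustrative assumptions; a practical system would use a trained implementation from a deep-learning framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell, forward pass only (no training), with the
    input, forget, cell, and output gates stacked in one weight matrix."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)   # updated cell state
        h_new = o * np.tanh(c_new)       # updated hidden state
        return h_new, c_new

def classify_sequence(cell, key_pose_features, W_out):
    """Run the cell over a (T, input_dim) sequence of key-pose feature
    vectors; return class probabilities from the final hidden state."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in key_pose_features:
        h, c = cell.step(x, h, c)
    logits = W_out @ h
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()
```

The point of the recurrence is that the hidden state carries information about earlier poses forward, so the final prediction reflects the order of state transitions, not just the set of key poses.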
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Adama, D.A., Lotfi, A., Langensiepen, C. (2019). Key Frame Extraction and Classification of Human Activities Using Motion Energy. In: Lotfi, A., Bouchachia, H., Gegov, A., Langensiepen, C., McGinnity, M. (eds) Advances in Computational Intelligence Systems. UKCI 2018. Advances in Intelligent Systems and Computing, vol 840. Springer, Cham. https://doi.org/10.1007/978-3-319-97982-3_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-97981-6
Online ISBN: 978-3-319-97982-3
eBook Packages: Intelligent Technologies and Robotics