Abstract
Direct control of a humanoid robot by human motion is an important topic in current research. Most existing methods rely on additional equipment, such as a Kinect sensor, that is usually not mounted on the robot. To avoid such external equipment, we explore a control method that uses only the robot's low-resolution onboard camera. First, a stacked hourglass network is employed to obtain accurate 2D heatmaps of human joint positions from RGB images captured by the robot's camera. Then, 3D human poses, i.e., the 3D coordinates of the body joints, are estimated from the 2D heatmaps by a method that reconstructs 3D poses from 2D poses. Finally, the robot's joint rotation angles are computed from these 3D coordinates and transmitted to the robot, which reproduces the original human pose. Taking the NAO robot as an example, experimental results show that with our method the humanoid robot imitates the motions of different human actors in different scenes well.
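The final step above maps 3D joint coordinates to robot joint angles. As a minimal sketch (not the authors' exact formulation), the angle at a joint can be computed from the two limb vectors it connects, e.g. the elbow angle from the shoulder, elbow, and wrist positions:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b, formed by the segments b->a and b->c."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point drift outside [-1, 1]
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: shoulder, elbow, and wrist positions forming a right angle
shoulder, elbow, wrist = [0, 0, 0], [0, -1, 0], [1, -1, 0]
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # 90.0
```

Angles computed this way would still need to be mapped onto the robot's own joint conventions and clamped to its mechanical limits (on the NAO, via its motion API) before being transmitted.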
Acknowledgment
This work is supported by the Liaoning Distinguished Professor program; the Liaoning Province Doctor Startup Fund (No. 201601302); the Hunan Provincial Natural Science Fund Project (No. 2015JJ6028); the Excellent Youth Project of the Hunan Education Department (No. 16B065); the Science and Technology Innovation Fund of Dalian (No. 2018J12GX036); the High-level Talent Innovation Support Project of Dalian (No. 2017RD11); and the Equipment Pre-research Foundation for Key Laboratory of National Defense Science and Technology (No. 614222202040571).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Guo, B., Yi, P., Zhou, D., Wei, X. (2019). Humanoid Robot Control Based on Deep Learning. In: El Rhalibi, A., Pan, Z., Jin, H., Ding, D., Navarro-Newball, A., Wang, Y. (eds) E-Learning and Games. Edutainment 2018. Lecture Notes in Computer Science(), vol 11462. Springer, Cham. https://doi.org/10.1007/978-3-030-23712-7_52
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-23711-0
Online ISBN: 978-3-030-23712-7