
Humanoid Robot Control Based on Deep Learning

  • Conference paper
  • First Online:
E-Learning and Games (Edutainment 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11462)


Abstract

The direct control of a humanoid robot by human motion is an important topic of current research. Most existing methods rely on additional equipment, such as a Kinect, that is usually not mounted on the robot itself. To avoid such external equipment, we explore a robot control method that uses only the robot's low-resolution onboard camera. First, a stacked hourglass network is employed to obtain accurate 2D heatmaps of human joint positions from the RGB image captured by the robot's camera. Then, 3D human poses, i.e., the coordinates of the body joints, are estimated from the 2D heatmaps with a method that reconstructs 3D poses from 2D poses. Finally, the robot's joint rotation angles are computed from these 3D coordinates and transmitted to the robot to reproduce the original human pose. Using the NAO robot as an example, experimental results show that with our method the humanoid robot can imitate the motions of different human actors in different scenes well.
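
As an illustration of the first step, the sketch below shows one common way to recover 2D joint coordinates from hourglass-style heatmaps: take the per-joint argmax and rescale it to the input image resolution. This is not the authors' code; the array layout (one heatmap channel per joint) and the function name heatmaps_to_keypoints are assumptions made for illustration.

    # Minimal sketch (assumed, not the authors' implementation): recover 2D
    # joint positions from stacked-hourglass-style heatmaps.
    import numpy as np

    def heatmaps_to_keypoints(heatmaps, image_size):
        """heatmaps: (num_joints, H, W); image_size: (width, height).
        Returns (num_joints, 2) pixel coordinates, one (x, y) per joint."""
        num_joints, h, w = heatmaps.shape
        keypoints = np.zeros((num_joints, 2))
        for j in range(num_joints):
            # Location of the maximum response in this joint's heatmap.
            idx = np.argmax(heatmaps[j])
            y, x = np.unravel_index(idx, (h, w))
            # Rescale from heatmap resolution to image resolution.
            keypoints[j, 0] = x * image_size[0] / w
            keypoints[j, 1] = y * image_size[1] / h
        return keypoints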
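For the final step, the sketch below outlines how a joint angle computed from estimated 3D joint positions could be sent to a NAO robot through the NAOqi ALMotion proxy. The angle mapping, the example 3D coordinates, and the robot IP address are illustrative assumptions; ALProxy, setStiffnesses, and setAngles are standard NAOqi calls, and "LElbowRoll" is a real NAO joint name.

    # Minimal sketch (assumptions, not the paper's implementation): map one
    # human joint angle onto a NAO joint and apply it via NAOqi.
    import numpy as np
    from naoqi import ALProxy  # NAOqi Python SDK

    def angle_between(u, v):
        """Angle in radians between two 3D vectors."""
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Illustrative 3D positions from the 2D-to-3D reconstruction step.
    shoulder = np.array([0.0, 0.0, 0.0])
    elbow = np.array([0.0, -0.25, 0.0])
    wrist = np.array([0.2, -0.25, 0.0])
    elbow_angle = angle_between(shoulder - elbow, wrist - elbow)

    # Map the human elbow bend onto NAO's LElbowRoll joint, clipped to the
    # joint's limits (about -1.54 to -0.03 rad); this mapping is illustrative.
    nao_angle = float(np.clip(-(np.pi - elbow_angle), -1.54, -0.03))

    motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # robot IP is a placeholder
    motion.setStiffnesses("LElbowRoll", 1.0)            # enable the joint
    motion.setAngles("LElbowRoll", nao_angle, 0.2)      # apply at 20% max speed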



Acknowledgment

This work is supported by the Liaoning Distinguished Professor Program, the Liaoning Province Doctor Startup Fund (No. 201601302), the Hunan Provincial Natural Science Fund Project (No. 2015JJ6028), the Excellent Youth Project of the Hunan Education Department (No. 16B065), the Science and Technology Innovation Fund of Dalian (No. 2018J12GX036), the High-level Talent Innovation Support Project of Dalian (No. 2017RD11), and the Equipment Pre-research Foundation for the Key Laboratory of National Defense Science and Technology (No. 614222202040571).

Author information

Corresponding authors

Correspondence to Pengfei Yi or Dongsheng Zhou.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Guo, B., Yi, P., Zhou, D., Wei, X. (2019). Humanoid Robot Control Based on Deep Learning. In: El Rhalibi, A., Pan, Z., Jin, H., Ding, D., Navarro-Newball, A., Wang, Y. (eds) E-Learning and Games. Edutainment 2018. Lecture Notes in Computer Science, vol 11462. Springer, Cham. https://doi.org/10.1007/978-3-030-23712-7_52


  • DOI: https://doi.org/10.1007/978-3-030-23712-7_52

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23711-0

  • Online ISBN: 978-3-030-23712-7

  • eBook Packages: Computer Science, Computer Science (R0)
