
Humanoid robot runs maze mode using depth-first traversal algorithm

Multimedia Tools and Applications

Abstract

This paper focuses on humanoid robot walking in a maze. We propose a depth-first traversal algorithm for maze searching that combines a single-view model with sonar-based obstacle avoidance and follows a "turn right first" principle, enabling the robot to avoid obstacles and walk out of the maze efficiently. The strength of the proposed algorithm is that it applies to a variety of complex mazes. In a three-dimensional maze, the visual system of the NAO robot first perceives the surrounding environment; image processing technology then identifies the positions of nearby obstacles, after which the robot can avoid them and exit the maze. Intelligent robots today have a wide range of applications, and to integrate quickly into our daily lives they must be able to recognize obstacles and walk freely as humans do, which requires image processing technology that helps the robot identify obstacles. While walking, the robot obeys the turn-right-first rule: sonar perceives obstacles on the left and right sides, image processing probes for obstacles to the right and left at each turn, and the robot keeps a memory of the path it has already walked. The experimental results indicate that this method reliably enables the NAO robot to avoid obstacles and get out of the maze.
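To make the abstract's idea concrete, the following Python code sketches a depth-first maze traversal with a "turn right first" movement priority and a memory of visited cells. This is a minimal illustration under stated assumptions, not the paper's implementation: the grid model and the detect_obstacle, turn, and move_forward hooks are hypothetical stand-ins for the NAO robot's sonar/vision sensing and locomotion APIs.

```python
# Minimal sketch: depth-first maze traversal with "turn right first"
# priority and a visited-cell memory. The sensor/locomotion callables
# are hypothetical placeholders, not the NAO SDK.

# Grid headings: 0 = north, 1 = east, 2 = south, 3 = west.
DELTAS = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}
INVERSE = {v: k for k, v in DELTAS.items()}

def neighbor(cell, heading):
    dx, dy = DELTAS[heading]
    return (cell[0] + dx, cell[1] + dy)

def solve_maze(start, is_exit, detect_obstacle, turn, move_forward):
    """Walk out of the maze by depth-first search.

    At each cell, candidate headings are tried in "turn right first"
    order: right, straight, left, then back. detect_obstacle(heading)
    stands in for the sonar/vision checks; turn(heading) and
    move_forward() stand in for the locomotion calls.
    """
    visited = {start}
    path = [start]                  # stack of cells for backtracking
    cell, heading = start, 0

    while path:
        if is_exit(cell):
            return path             # cells walked from start to the exit
        moved = False
        for h in ((heading + 1) % 4, heading, (heading + 3) % 4,
                  (heading + 2) % 4):
            nxt = neighbor(cell, h)
            if nxt not in visited and not detect_obstacle(h):
                turn(h)
                move_forward()
                visited.add(nxt)
                path.append(nxt)
                cell, heading = nxt, h
                moved = True
                break
        if not moved:               # dead end: backtrack one cell
            path.pop()
            if path:
                back = path[-1]
                heading = INVERSE[(back[0] - cell[0], back[1] - cell[1])]
                turn(heading)
                move_forward()
                cell = back
    return None                     # no reachable exit
```

In a simulation, detect_obstacle would be a lookup into a wall map and turn/move_forward could simply log the motion; on the physical robot they would wrap the sonar readings, the image-processing checks at turns, and the walk commands. The visited set is what the abstract calls the memory of the path already walked: it prevents the robot from re-entering explored cells, so the depth-first search terminates.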



Acknowledgements

The author deeply acknowledges Ms. Shujie Wang for her initial testing support on the first rough model.

Author information

Corresponding author

Correspondence to Li-Hong Juang.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Juang, LH. Humanoid robot runs maze mode using depth-first traversal algorithm. Multimed Tools Appl 82, 11847–11871 (2023). https://doi.org/10.1007/s11042-022-13729-8


Keywords

Navigation