Abstract
This paper addresses humanoid robot walking in a maze. We propose a depth-first traversal algorithm for maze searching that combines a single-view vision model with sonar-based obstacle avoidance and follows a "turn right first" principle, enabling the robot to avoid obstacles and walk out of the maze efficiently. A strength of the proposed algorithm is that it generalizes to a variety of complex mazes. In a three-dimensional maze, the visual system of the NAO robot first perceives the surrounding environment, and image processing is then used to locate nearby obstacles; with this information the NAO robot can avoid the obstacles and exit the maze. Intelligent robots today have a wide range of applications, and to integrate quickly into daily life they must be able to recognize obstacles and walk freely as humans do, which requires image processing technology that helps the robot identify obstacles. While walking, the robot applies the turn-right-first rule: sonar is used to sense obstacles on its left and right sides, and at each turn image processing probes for obstacles to the right and left. Finally, the robot keeps a memory of the path it has already walked. The experimental results indicate that this method provides a reliable guarantee that the NAO robot can avoid obstacles and get out of the maze.
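The strategy summarized above, a depth-first traversal with "turn right first" priority and a memory of visited cells, can be sketched on a grid-maze abstraction. This is an illustrative sketch only, not the authors' implementation: the grid representation, the `solve_maze` function, and the direction encoding are assumptions introduced for exposition.

```python
# Illustrative sketch (not the paper's actual code): depth-first maze
# traversal with a "turn right first" priority and a visited-cell memory,
# on a hypothetical grid maze (0 = free cell, 1 = obstacle).

def solve_maze(grid, start, goal, heading=0):
    """Return a start-to-goal path as a list of (row, col) cells, or None."""
    # Headings indexed clockwise: 0 = East, 1 = South, 2 = West, 3 = North.
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    rows, cols = len(grid), len(grid[0])
    visited = set()          # memory of cells already walked
    path = []

    def dfs(cell, heading):
        if cell == goal:
            path.append(cell)
            return True
        visited.add(cell)
        # Relative turn priority: right first, then straight, left, back.
        for turn in (1, 0, -1, 2):
            h = (heading + turn) % 4
            nr, nc = cell[0] + DIRS[h][0], cell[1] + DIRS[h][1]
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                if dfs((nr, nc), h):       # recurse; backtrack on failure
                    path.append(cell)
                    return True
        return False

    found = dfs(start, heading)
    return list(reversed(path)) if found else None

# Small demo maze: the only route turns right (south) at the east wall.
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = solve_maze(maze, (0, 0), (2, 0))
```

On the demo maze the walker heads east, turns right at the wall, and doubles back west along the bottom row. On the physical robot, the grid lookup would be replaced by the sonar and camera obstacle checks described in the paper.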
Acknowledgements
The author deeply acknowledges Ms. Shujie Wang for her initial testing support on the first rough model.
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Juang, LH. Humanoid robot runs maze mode using depth-first traversal algorithm. Multimed Tools Appl 82, 11847–11871 (2023). https://doi.org/10.1007/s11042-022-13729-8