Two Efficient Visual Methods for Segment Self-localization

  • Original Research
  • Published in SN Computer Science

Abstract

Localization is an essential step in visual navigation algorithms in robotics. Some visual navigation algorithms define the environment through a sequence of images, called a visual path. The interval between each pair of consecutive images is called a segment. A crucial step in this kind of navigation is finding the segment in which the robot is currently located (segment self-localization). Visual segment self-localization methods consist of two stages. In the first stage, features of the robot's current image are matched against all the images that form the visual path; outlier removal methods such as RANSAC are usually applied after matching to discard mismatched features. In the second stage, a segment is chosen based on the results of the first stage. Existing segment self-localization methods select a segment based only on the percentage of matched features, which leads to incorrect segment estimates in some cases. In this paper, an additional parameter, based on the perspective projection model, is also used to estimate the segment. Moreover, instead of RANSAC, which is stochastic and time-consuming, a simpler and more effective outlier detection method is proposed. The proposed methods are tested on the Karlsruhe dataset and give acceptable results. They are also compared with three methods reviewed by Nguyen et al. (J Intell Robot Syst 84:217, 2016). Although the proposed methods use a more straightforward outlier detection scheme, they produce more accurate results.
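
To make the two-stage pipeline concrete, the sketch below shows one way it could be implemented in Python with OpenCV. It is only a minimal illustration, not the authors' implementation: the ORB features, the brute-force matcher, the 1.5×IQR box-plot rule standing in for the proposed outlier test, and all function names are assumptions introduced here.

# Illustrative sketch only: ORB features, brute-force matching, the IQR rule,
# and all names below are assumptions, not the paper's actual implementation.
# Images are expected as 8-bit grayscale NumPy arrays.
import cv2
import numpy as np

def match_displacements(img_query, img_key, orb, matcher):
    """Stage 1: match keypoints between the current image and one key image.
    Returns the displacement vector of each matched pair and the number of
    keypoints detected in the current image (used for a match percentage)."""
    kp_q, des_q = orb.detectAndCompute(img_query, None)
    kp_k, des_k = orb.detectAndCompute(img_key, None)
    if des_q is None or des_k is None:
        return np.empty((0, 2)), len(kp_q) if kp_q else 0
    matches = matcher.match(des_q, des_k)
    if not matches:
        return np.empty((0, 2)), len(kp_q)
    disps = np.array([np.subtract(kp_k[m.trainIdx].pt, kp_q[m.queryIdx].pt)
                      for m in matches])
    return disps, len(kp_q)

def remove_outliers_iqr(disps):
    """Deterministic stand-in for RANSAC: keep matches whose displacement
    magnitude lies within the 1.5*IQR box-plot fences."""
    if len(disps) == 0:
        return disps
    mag = np.linalg.norm(disps, axis=1)
    q1, q3 = np.percentile(mag, [25, 75])
    iqr = q3 - q1
    keep = (mag >= q1 - 1.5 * iqr) & (mag <= q3 + 1.5 * iqr)
    return disps[keep]

def localize_segment(current_img, key_images):
    """Stage 2: score the current image against every key image of the visual
    path and return the index of the best match; the robot is assumed to lie
    in a segment adjacent to that key image."""
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    scores = []
    for key_img in key_images:
        disps, n_kp = match_displacements(current_img, key_img, orb, matcher)
        inliers = remove_outliers_iqr(disps)
        scores.append(len(inliers) / max(n_kp, 1))  # inlier match percentage
    return int(np.argmax(scores))

For example, localize_segment(cv2.imread("current.png", 0), [cv2.imread(p, 0) for p in key_image_files]) would return the index of the best-matching key image. The paper's own second stage additionally uses a parameter derived from the perspective projection model, which is not modelled in this sketch.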

References

  1. Nguyen T, Mann GKI, Gosine RG, Vardy A. Appearance-based visual-teach-and-repeat navigation technique for micro aerial vehicle. J Intell Robot Syst. 2016;84:217.

  2. Mahadevaswamy UB, Keshava V, Lamani ACR, Abbur LP, Mahadeva S. Robotic mapping using autonomous vehicle. SN Comput Sci. 2020. https://doi.org/10.1007/s42979-020-00190-3.

  3. Xu L, Feng C, Kamat VR, Menassa CC. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments. Autom Constr. 2019;104:230–45.

  4. Swedish T, Raskar R. Deep visual teach and repeat on path networks. In: IEEE Computer Society conference on computer vision and pattern recognition workshops, 2018.

  5. King P, Vardy A, Forrest AL. Teach-and-repeat path following for an autonomous underwater vehicle. J Field Robot. 2018;35:748–63.

  6. Guerrero JJ, Martinez-Cantin R, Sagüés C. Visual map-less navigation based on homographies. J Robot Syst. 2005;22:569–81.

  7. Chen Z, Birchfield ST. Qualitative vision-based path following. IEEE Trans Robot. 2009;25:749–54.

  8. Zhichao C, Birchfield ST. Qualitative vision-based mobile robot navigation. In: Proceedings of the IEEE international conference on robotics and automation; 2006.

  9. Nguyen T, Mann GKI, Gosine RG. Vision-based qualitative path-following control of quadrotor aerial vehicle. In: 2014 international conference on unmanned aircraft systems, ICUAS 2014—conference Proceedings; 2014.

  10. Toudeshki AG, Shamshirdar F, Vaughan R. Robust UAV visual teach and repeat using only sparse semantic object features. In: Proceedings—2018 15th conference on computer and robot vision, CRV 2018; 2018.

  11. Kassir MM, Palhang M, Ahmadzadeh MR. Qualitative vision-based navigation based on sloped funnel lane concept. Intell Serv Robot. 2020;13:235–50.

  12. Warren M, Greeff M, Patel B, Collier J, Schoellig AP, Barfoot TD. There’s no place like home: visual teach and repeat for emergency return of multirotor UAVs during GPS failure. IEEE Robot Autom Lett. 2019;4(1):161–8.

  13. Kumar A, Gupta S, Fouhey D, Levine S, Malik J. Visual memory for robust path following. In: Advances in neural information processing systems; 2018.

  14. Vardy A. Using feature scale change for robot localization along a route. In: International conference on intelligent robots and systems (IROS); 2010. p. 4830–5.

  15. Erhard S, Wenzel KE, Zell A. Flyphone: visual self-localisation using a mobile phone as onboard image processor on a quadrocopter. J Intell Robot Syst. 2009;57(1–4):451–65.

  16. Majdik AL, Albers-Schoenberg Y, Scaramuzza D. MAV urban localization from Google street view data. In: International conference on intelligent robots and systems; 2013. p. 3979–86.

  17. Thrun S, Burgard W, Fox D. Probabilistic robotics. Cambridge: MIT Press; 2005.

  18. Garcia-Fidalgo E, Ortiz A. Vision-based topological mapping and localization methods: a survey. Robot Auton Syst. 2015;64(Supplement C):1–20.

  19. Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM. 1981;24(6):381–95.

  20. Tomasi C, Kanade T. Detection and tracking of point features. Technical report, School of Computer Science, Carnegie Mellon University; 1991.

  21. Dutta A, Mondal A, Dey N, et al. Vision tracking: a survey of the state-of-the-art. SN Comput Sci. 2020. https://doi.org/10.1007/s42979-019-0059-z.

  22. Dawson R. How significant is a boxplot outlier? J Stat Educ. 2011. https://doi.org/10.1080/10691898.2011.11889610.

  23. Karlsruhe sequences dataset. http://www.cvlibs.net/datasets/karlsruhe_sequences. Visited 2019; accessed 2021.

  24. Pronobis A, Caputo B. COLD: the CoSy localization database. Int J Robot Res. 2009;28(5):588–94.

  25. Smith M, Baldwin I, Churchill W, Paul R, Newman P. The new college vision and laser data set. Int J Robot Res. 2009;28(5):595–9.

  26. Zuliani M. RANSAC for dummies. Citeseer; 2008.

Acknowledgements

The authors would like to thank Artificial Intelligence laboratory members for their support.

Author information

Corresponding author

Correspondence to Mohamad Mahdi Kassir.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kassir, M.M., Palhang, M. & Ahmadzadeh, M.R. Two Efficient Visual Methods for Segment Self-localization. SN COMPUT. SCI. 2, 80 (2021). https://doi.org/10.1007/s42979-021-00492-0

Keywords

Navigation