Performance Comparison of Visual Teach and Repeat Systems for Mobile Robots

  • Conference paper
Modelling and Simulation for Autonomous Systems (MESAS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13866)

Abstract

In practical work scenarios, it is often necessary to repeat specific tasks, which include navigating along a desired path. Visual teach and repeat is a form of autonomous navigation in which a robot repeats a previously taught path using a camera and dead reckoning. Many teach and repeat methods have been proposed in the literature, but only a few are open-source. In this paper, we compare four recently published open-source methods and the proprietary solution embedded in the Boston Dynamics Spot robot. Each method is intended for a different use, which shapes its strengths and weaknesses. When deciding which method to use, factors such as the environment and the desired precision and speed should be taken into account. For example, in controlled artificial environments, which do not change significantly, navigation precision and speed are more important than robustness to environmental variation. In unstructured natural environments, by contrast, appearance varies over time, making robustness to change a crucial property of an outdoor navigation system. This paper compares the speed, precision, reliability, robustness, and practicality of the available teach and repeat methods, and outlines their strengths and flaws to help choose the most suitable method for a particular application.
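To make the teach and repeat principle described above concrete, the sketch below shows one minimal realisation of the idea: during the teach phase the robot stores camera images keyed by dead-reckoned travel distance, and during the repeat phase it advances by odometry while steering to cancel the horizontal image shift between the current view and the taught view. This is an illustrative sketch only, not the implementation of any of the compared systems; get_image, get_distance, set_velocity, and done are hypothetical robot-interface callables, and the column-profile correlation stands in for the far more robust image registration that real systems perform.

```python
# Minimal teach-and-repeat sketch (illustration only). The robot
# interface is abstracted behind hypothetical callables supplied by
# the caller: get_image() -> 2-D grayscale numpy array,
# get_distance() -> dead-reckoned distance travelled in metres,
# set_velocity(forward, angular), and done() -> bool.

import numpy as np

def horizontal_shift(taught, current):
    """Estimate the horizontal pixel offset between two grayscale
    images by cross-correlating their column-mean intensity profiles."""
    a = taught.mean(axis=0) - taught.mean()
    b = current.mean(axis=0) - current.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

def teach(get_image, get_distance, done, spacing=0.5):
    """Teach phase: while a human drives the robot, store one image
    every `spacing` metres of dead-reckoned distance."""
    path_map, next_key = [], 0.0
    while not done():
        if get_distance() >= next_key:
            path_map.append((next_key, get_image()))
            next_key += spacing
    return path_map

def repeat(path_map, get_image, get_distance, set_velocity,
           speed=0.5, gain=0.002):
    """Repeat phase: advance keyframe by keyframe using odometry,
    steering so the current view realigns with the taught one.
    The sign of the correction depends on camera and robot frame
    conventions; `gain` must be tuned for the platform."""
    for key_distance, taught_image in path_map:
        while get_distance() < key_distance:
            shift = horizontal_shift(taught_image, get_image())
            set_velocity(forward=speed, angular=gain * shift)
    set_velocity(forward=0.0, angular=0.0)
```

In practice, the compared systems replace the naive correlation with robust feature matching or learned image registration and must cope with occlusions, lighting, and seasonal change; the comparison in this paper evaluates exactly how well each method handles such variation.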

This research was funded by the Czech Science Foundation, research project number 20-27034J 'ToltaTempo'.

Author information

Correspondence to Maxim Simon.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Simon, M., Broughton, G., Rouček, T., Rozsypálek, Z., Krajník, T. (2023). Performance Comparison of Visual Teach and Repeat Systems for Mobile Robots. In: Mazal, J., et al. Modelling and Simulation for Autonomous Systems. MESAS 2022. Lecture Notes in Computer Science, vol 13866. Springer, Cham. https://doi.org/10.1007/978-3-031-31268-7_1

  • DOI: https://doi.org/10.1007/978-3-031-31268-7_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31267-0

  • Online ISBN: 978-3-031-31268-7

  • eBook Packages: Computer Science, Computer Science (R0)
