I Can See for Miles and Miles: An Extended Field Test of Visual Teach and Repeat 2.0

  • Conference paper
  • In: Field and Service Robotics

Abstract

Autonomous path-following systems based on the Teach and Repeat paradigm allow robots to traverse extensive networks of manually driven paths using on-board sensors. These methods are well suited to applications that involve repeated traversals of constrained paths, such as factory floors, orchards, and mines. For path-following systems to be viable in these applications, they must be able to navigate large distances over long time periods, a challenging task for vision-based systems, which are susceptible to appearance change. This paper details Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured, outdoor environments. These capabilities are achieved through a suite of novel, multi-experience, vision-based navigation algorithms. We validated our system experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada, where we incrementally built and autonomously traversed a 5 km network of paths. Over the span of the field test, the robot logged more than 140 km of autonomous driving with an autonomy rate of 99.6%, despite significant appearance change due to lighting and weather, including driving at night using headlights.
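To make the Teach and Repeat paradigm concrete, the sketch below shows the two-phase structure the abstract describes: a teach pass that records a chain of keyframes while the robot is driven manually, and a repeat pass that localizes live imagery against those keyframes and steers back onto the path. This is a minimal illustration, not the VT&R 2.0 implementation; Keyframe, PathMap, match_fn, and controller_fn are hypothetical stand-ins for the paper's multi-experience localization and path-tracking components.

```python
# Minimal teach-and-repeat skeleton (illustrative only, not VT&R 2.0 code).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Keyframe:
    pose: Tuple[float, float, float]   # (x, y, heading) relative to the previous keyframe
    features: List[str]                # stand-ins for sparse visual features

@dataclass
class PathMap:
    keyframes: List[Keyframe] = field(default_factory=list)

def teach(sensor_stream) -> PathMap:
    """Teach pass: record a relative chain of keyframes during manual driving."""
    path = PathMap()
    for pose, features in sensor_stream:
        path.keyframes.append(Keyframe(pose, features))
    return path

def repeat(path: PathMap,
           match_fn: Callable[[Keyframe], Tuple[float, float]],
           controller_fn: Callable[[Tuple[float, float]], None]) -> None:
    """Repeat pass: localize against each taught keyframe and correct toward the path."""
    for kf in path.keyframes:
        offset = match_fn(kf)      # visual localization against the stored map
        controller_fn(offset)      # steering correction toward the taught path

# Toy usage with stubbed sensing and control:
stream = [((1.0, 0.0, 0.0), [f"feat{i}"]) for i in range(3)]
taught = teach(stream)
repeat(taught, match_fn=lambda kf: (0.0, 0.0), controller_fn=lambda off: None)
```

As a sanity check on the headline numbers, a 99.6% autonomy rate over more than 140 km implies that manual interventions accounted for roughly 0.4% of the distance, on the order of half a kilometre in total.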


Notes

  1. We use SURF features triangulated from greyscale and color-constant stereo measurements in our implementation, but the overall system is generic to any point-based, sparse visual feature.
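The note above names the feature pipeline only loosely, so the sketch below shows one plausible way to triangulate sparse SURF features from a rectified stereo pair using OpenCV. It is an assumption-laden illustration, not the paper's implementation: it requires an OpenCV build with the non-free xfeatures2d contrib module, the focal length F_PX and baseline BASE are placeholder calibration values, the principal point is assumed to sit at the image centre, and the color-constant channel mentioned in the note is omitted.

```python
# Hedged sketch: sparse SURF stereo triangulation (not the authors' pipeline).
import cv2

F_PX = 400.0   # focal length in pixels (placeholder calibration value)
BASE = 0.24    # stereo baseline in metres (placeholder calibration value)

def triangulate_surf(left_gray, right_gray):
    """Return a list of (X, Y, Z) points triangulated from matched SURF features.
    Assumes rectified greyscale images and a principal point at the image centre."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_l, des_l = surf.detectAndCompute(left_gray, None)
    kp_r, des_r = surf.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    cy, cx = left_gray.shape[0] / 2.0, left_gray.shape[1] / 2.0
    points = []
    for m in matcher.match(des_l, des_r):
        (ul, vl) = kp_l[m.queryIdx].pt
        (ur, vr) = kp_r[m.trainIdx].pt
        d = ul - ur                          # disparity along rectified rows
        if d > 1.0 and abs(vl - vr) < 2.0:   # reject weak or misaligned matches
            z = F_PX * BASE / d              # depth from disparity
            points.append(((ul - cx) * z / F_PX, (vl - cy) * z / F_PX, z))
    return points
```

In a system of this kind, such 3D landmarks would feed the stereo localization back end; swapping SURF for any other point-based sparse feature changes only the detector call.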


Acknowledgements

This work was supported financially and in-kind by Clearpath Robotics and the Natural Sciences and Engineering Research Council (NSERC) through the NSERC Canadian Field Robotics Network (NCFRN). The authors would also like to extend their deepest thanks to Ethier Sand and Gravel for allowing us to conduct our field test at their site.

Author information

Correspondence to Michael Paton.



Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Paton, M., MacTavish, K., Berczi, L.-P., van Es, S.K., Barfoot, T.D. (2018). I Can See for Miles and Miles: An Extended Field Test of Visual Teach and Repeat 2.0. In: Hutter, M., Siegwart, R. (eds) Field and Service Robotics. Springer Proceedings in Advanced Robotics, vol 5. Springer, Cham. https://doi.org/10.1007/978-3-319-67361-5_27


  • DOI: https://doi.org/10.1007/978-3-319-67361-5_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67360-8

  • Online ISBN: 978-3-319-67361-5

  • eBook Packages: Engineering
