Fusing LIDAR and Vision for Autonomous Dirt Road Following

Incorporating a Visual Feature into the Tentacles Approach

Conference paper
Autonome Mobile Systeme 2009

Part of the book series: Informatik aktuell (INFORMAT)

Abstract

In this paper we describe how visual features can be incorporated into the well-known tentacles approach [1], which until now has used only LIDAR and GPS data and was therefore limited to scenarios with significant obstacles or non-flat surfaces along roads. In addition, we present a visual feature based solely on color intensity that can be used to visually rate tentacles. Both the presented sensor fusion and the color-based feature were applied with great success at the C-ELROB 2009 robotic competition.
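
The details of the fusion scheme are in the full paper; purely as an illustration of the idea of rating a tentacle with a color-intensity feature, the sketch below scores a tentacle by how closely the grayscale intensities sampled along its image projection match a reference road intensity. The function name, the projected-point input, the reference value road_ref, and the scoring rule are all assumptions made for this sketch, not the authors' implementation.

    import numpy as np

    def tentacle_intensity_score(image_gray, tentacle_px, road_ref):
        """Rate one tentacle by a color-intensity feature (illustrative sketch).

        image_gray  : (H, W) array of grayscale intensities in [0, 1]
        tentacle_px : (N, 2) int array of (row, col) points of the tentacle
                      projected into the camera image
        road_ref    : assumed reference intensity of the drivable road surface

        Returns a score in [0, 1]; higher means the pixels along the
        tentacle look more like the reference road surface.
        """
        h, w = image_gray.shape
        rows, cols = tentacle_px[:, 0], tentacle_px[:, 1]
        # Discard projected points that fall outside the image.
        valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
        if not np.any(valid):
            return 0.0  # tentacle not visible: worst visual rating
        samples = image_gray[rows[valid], cols[valid]]
        # Mean absolute deviation from the reference intensity, mapped so
        # that zero deviation yields the best score of 1.
        return float(1.0 - np.clip(np.mean(np.abs(samples - road_ref)), 0.0, 1.0))

A fused rating could then, for example, combine this visual score with the LIDAR-based tentacle evaluation as a weighted sum before the best tentacle is selected; that weighting is again an assumption, since the paper's actual combination rule is not reproduced here.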



References

  1. F. von Hundelshausen, M. Himmelsbach, F. Hecker, A. Müller, and H.-J. Wünsche, "Driving with tentacles: integral structures for sensing and motion," Journal of Field Robotics, 2008.

  2. C. Rasmussen, "Combining laser range, color, and texture cues for autonomous road following," IEEE International Conference on Robotics and Automation, 2002.

  3. A. Broggi, S. Cattani, P. P. Porta, and P. Zani, "A laserscanner-vision fusion system implemented on the TerraMax autonomous vehicle," IEEE International Conference on Intelligent Robots and Systems, 2006.

  4. U. Franke, H. Loose, and C. Knöppel, "Lane recognition on country roads," IEEE Intelligent Vehicles Symposium, pp. 99–104, 2007.

  5. S. Thrun, M. Montemerlo, and A. Aron, "Probabilistic terrain analysis for high-speed desert driving," Proceedings of Robotics: Science and Systems, Philadelphia, USA, August 2006.

  6. C. Tan, T. Hong, T. Chang, and M. Shneier, "Color model-based real-time learning for road following," IEEE Intelligent Transportation Systems Conference, 2006.

  7. J. Zhang and H.-H. Nagel, "Texture-based segmentation of road images," IEEE Symposium on Intelligent Vehicles, pp. 260–265, 1994.

  8. E. D. Dickmanns and B. D. Mysliwetz, "Recursive 3-D road and relative ego-state recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 199–213, February 1992.


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Manz, M., Himmelsbach, M., Luettel, T., Wuensche, HJ. (2009). Fusing LIDAR and Vision for Autonomous Dirt Road Following. In: Dillmann, R., Beyerer, J., Stiller, C., Zöllner, J.M., Gindele, T. (eds) Autonome Mobile Systeme 2009. Informatik aktuell. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10284-4_3
