
Learning traversability models for autonomous mobile vehicles


Abstract

Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle’s navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene.
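To make the model-building step concrete, the sketch below shows one way a terrain-region model of the kind described above could be represented: a color histogram accumulated from the pixels projected into a grid cell, paired with a traversability score derived from the associated range data. The class name, bin count, and update rule are illustrative assumptions, not the paper's implementation, and the paper's models additionally include texture features.

```python
import numpy as np

class TerrainModel:
    """Hypothetical sketch of one learned terrain-region model: a color
    histogram plus a traversability estimate derived from range data.
    (The paper's models also include texture; this is illustration only.)"""

    def __init__(self, n_bins=8):
        self.n_bins = n_bins
        self.hist = np.zeros((n_bins, n_bins, n_bins))  # R, G, B bin counts
        self.traversability = 0.0                       # 0 = blocked, 1 = clear
        self.n_samples = 0

    def update(self, pixels_rgb, traversability):
        """Accumulate colors of pixels projected into one grid cell and
        blend in the traversability score computed from the range data."""
        idx = (pixels_rgb // (256 // self.n_bins)).astype(int)  # (N, 3) uint8 -> bin indices
        np.add.at(self.hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        n_new = len(pixels_rgb)
        total = self.n_samples + n_new
        self.traversability = (self.traversability * self.n_samples
                               + traversability * n_new) / total
        self.n_samples = total

    def normalized(self):
        s = self.hist.sum()
        return self.hist / s if s > 0 else self.hist
```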

The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
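A minimal sketch of the color-only classification step described above, reusing the hypothetical TerrainModel from the previous sketch: an image region's histogram is matched against the stored models (here by histogram intersection, a standard technique) and the traversability of the best match is returned. The matching threshold and scoring function are assumptions, not the paper's method.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: sum of element-wise minima."""
    return np.minimum(h1, h2).sum()

def classify_region(region_pixels_rgb, models, n_bins=8, min_match=0.5):
    """Assign a traversability cost to an image region from color alone by
    matching its histogram against the stored terrain models; return None
    when no model matches well enough (unknown terrain)."""
    hist = np.zeros((n_bins, n_bins, n_bins))
    idx = (region_pixels_rgb // (256 // n_bins)).astype(int)  # (N, 3) uint8 -> bin indices
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    hist /= max(hist.sum(), 1.0)

    best_model, best_score = None, 0.0
    for model in models:                      # models: list of TerrainModel instances
        score = histogram_intersection(hist, model.normalized())
        if score > best_score:
            best_model, best_score = model, score

    if best_model is None or best_score < min_match:
        return None                           # unknown terrain; left to the planner
    return best_model.traversability
```

Returning None for weak matches reflects the general idea that regions resembling no learned model can be treated as unknown rather than forced into a traversable or blocked label; whether and how the actual system does this is beyond what the abstract states.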




Author information


Correspondence to Michael Shneier.


Cite this article

Shneier, M., Chang, T., Hong, T. et al. Learning traversability models for autonomous mobile vehicles. Auton Robot 24, 69–86 (2008). https://doi.org/10.1007/s10514-007-9063-6


Keywords

Navigation