Visual Recognition of Workspace Landmarks for Topological Navigation

Published in Autonomous Robots.

Abstract

In this work, robot navigation is approached using visual landmarks. Landmarks are not preselected or otherwise defined a priori; they are extracted automatically during a learning phase. To facilitate this, a saliency map is constructed, on the basis of which potential landmarks are highlighted. This map is used in conjunction with a model-driven segregation of the workspace to further delineate search areas for landmarks in the environment. For the sake of robustness, no semantic information is attached to the landmarks; they are stored as raw patterns, along with information readily available from the workspace segregation. This facilitates their accurate recognition during a navigation session, when the same steps as in the learning phase are employed to locate landmarks. The stored information is used to transform a previously learned landmark pattern according to the current position of the observer, thus achieving accurate landmark recognition. Results obtained with this approach demonstrate its validity and applicability in indoor workspaces.
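The pipeline sketched in the abstract — compute a saliency map, highlight candidate landmark regions, store them as raw patterns, and later recognize them by matching — can be illustrated with a minimal sketch. This is not the paper's actual method: the saliency computation here is a simple center-surround contrast (in the spirit of the attention models the approach builds on), the candidate picker is a greedy peak selector, and recognition is reduced to plain normalized cross-correlation; all function names and parameters (`box_mean`, `r_center`, `suppress`, etc.) are illustrative assumptions, and the model-driven workspace segregation and viewpoint-dependent pattern transformation are not reproduced.

```python
import numpy as np

def box_mean(img, r):
    # Mean of each (2r+1) x (2r+1) neighbourhood, via an integral image.
    pad = np.pad(img.astype(float), r + 1, mode="edge")
    ii = pad.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    k = 2 * r + 1
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def saliency_map(img, r_center=1, r_surround=4):
    # Center-surround contrast: regions that differ strongly from their
    # surroundings score high and become potential landmarks.
    return np.abs(box_mean(img, r_center) - box_mean(img, r_surround))

def extract_candidates(sal, n=5, suppress=8):
    # Greedily pick the n strongest saliency peaks, suppressing a
    # neighbourhood around each pick so candidates do not overlap.
    s = sal.copy()
    peaks = []
    for _ in range(n):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        peaks.append((int(y), int(x)))
        s[max(0, y - suppress):y + suppress + 1,
          max(0, x - suppress):x + suppress + 1] = -np.inf
    return peaks

def ncc(a, b):
    # Normalized cross-correlation between two equal-sized patches;
    # a score of 1.0 means a perfect match with the stored raw pattern.
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0
```

In the approach described above, the stored pattern would additionally be transformed according to the observer's current position before matching; the correlation step here stands in for that final recognition stage only.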




Cite this article

Trahanias, P.E., Velissaris, S. & Orphanoudakis, S.C. Visual Recognition of Workspace Landmarks for Topological Navigation. Autonomous Robots 7, 143–158 (1999). https://doi.org/10.1023/A:1008910100968
