Abstract
We have developed an indoor navigation technology, based on visual methods, that gives a robot the autonomy it requires. For a robot to run autonomously, it is essential that it can recognize the surrounding environment and its current location. Because multiple external sensors proved unnecessary, we built a navigation system in our test environment that reduces the information-processing burden by relying mainly on visual information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed a human living environment. In this article we discuss two modules: a self-position recognition system and an obstacle recognition system. In both, recognition is based on image processing of the visual information provided by the robot's camera. To give the robot autonomy, we also use a wheel encoder and a two-dimensional map of the space given beforehand. We explain the navigation system that integrates these two modules, apply it to a robot in an indoor environment, evaluate its performance, and discuss the problems revealed by our experimental results.
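The integration described above — dead reckoning from a wheel encoder, periodically corrected by the camera-based self-position recognition module — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the differential-drive odometry model, the simple weighted blending of the visual fix, and all names (`Pose`, `dead_reckon`, `fuse_visual_fix`, the `weight` parameter) are assumptions introduced for clarity.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float  # heading in radians

def dead_reckon(pose: Pose, left_dist: float, right_dist: float,
                wheel_base: float) -> Pose:
    """Update the pose from wheel-encoder distances (differential drive)."""
    d = (left_dist + right_dist) / 2.0          # forward travel
    dtheta = (right_dist - left_dist) / wheel_base
    theta = pose.theta + dtheta / 2.0           # midpoint heading
    return Pose(pose.x + d * math.cos(theta),
                pose.y + d * math.sin(theta),
                pose.theta + dtheta)

def fuse_visual_fix(pose: Pose, visual_pose: Pose, weight: float = 0.5) -> Pose:
    """Blend a camera-derived position fix into the odometry estimate.

    `visual_pose` stands in for the output of the self-position
    recognition module; the constant blending weight is an assumption
    (a real system would weight by landmark-measurement confidence).
    """
    return Pose(pose.x + weight * (visual_pose.x - pose.x),
                pose.y + weight * (visual_pose.y - pose.y),
                pose.theta + weight * (visual_pose.theta - pose.theta))

# Example: drive straight 1 m, then correct with a hypothetical visual fix.
p = dead_reckon(Pose(0.0, 0.0, 0.0), 1.0, 1.0, wheel_base=0.5)
p = fuse_visual_fix(p, Pose(1.1, 0.0, 0.0))
```

Under this sketch, encoder drift accumulates between visual fixes, and each camera observation of a known landmark (such as a wall on the two-dimensional map) pulls the estimate back toward the map frame.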
Additional information
This work was presented in part at the 14th International Symposium on Artificial Life and Robotics, Oita, Japan, February 5–7, 2009
Cite this article
Hayashi, E., Kinoshita, T. Development of an indoor navigation system for a monocular-vision-based autonomous mobile robot. Artif Life Robotics 14, 324–328 (2009). https://doi.org/10.1007/s10015-009-0671-4