Abstract
Autonomous mobile robots are expected to provide various services in human living environments. However, many problems remain to be solved before autonomous robots can work like humans. When a robot moves, it is important that it have self-localization ability and be able to recognize obstacles. A human can verify his or her present location by comparing memorized information, assumed to be correct, with the present situation; likewise, the distance to an object and its size can be estimated from a sense of distance based on memory or experience. The environment assumed for robotic activity in this study was therefore a finite space such as a family room, an office, or a hospital room. Because accurate position estimation is important to the success of a robot, we have developed a navigation system with self-localization ability that uses only a CCD camera and can detect whether the robot is moving accurately in a room or corridor. This article describes how the system was implemented and tested on our robot.
Cite this article
Hayashi, E. Navigation system for an autonomous robot using an ocellus camera in an indoor environment. Artif Life Robotics 12, 346–352 (2008). https://doi.org/10.1007/s10015-007-0488-y