A QR Code Image Processing Mechanism for Book Access Robot Positioning and Navigation

ABSTRACT
The mapping systems of current libraries are based mainly on inertial navigation and binocular visual navigation technology. Due to their limited mapping accuracy, however, such systems cannot meet the high-precision navigation, positioning, and book-information recognition requirements of automatic library book-access robots. To improve locating accuracy, we propose a QR code image processing mechanism for a vision-based library robot positioning and navigation system. Specifically, the mechanism uses binocular cameras fixed on the robot to quickly establish the three-dimensional coordinates of the QR codes; it extracts Haar features from the QR code images and rapidly identifies the target area with the AdaBoost algorithm; it then controls a robot arm equipped with a code reader to read the QR codes accurately; and finally, through an information correction step, it achieves accurate book information reading and improved navigation accuracy. Experiments are conducted to verify the effectiveness of the proposed mechanism.
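The binocular triangulation step described above can be sketched as follows. This is a minimal illustration of recovering the three-dimensional coordinates of a detected QR code from a rectified stereo pair; the focal length, baseline, principal point, and pixel coordinates are illustrative assumptions, not values from the paper.

```python
# Sketch of stereo triangulation for a point (e.g. a QR-code centre)
# observed in a rectified binocular image pair. Hypothetical camera
# parameters; real values come from stereo calibration.

def triangulate_point(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Recover camera-frame (X, Y, Z) of a point seen at column u_left
    in the left image and u_right in the right image (same row v in a
    rectified pair)."""
    disparity = u_left - u_right           # in pixels; must be positive
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity  # depth, from similar triangles
    x = (u_left - cx) * z / focal_px       # back-project through left camera
    y = (v - cy) * z / focal_px
    return x, y, z

# Example: QR-code centre at column 700 (left) / 650 (right), row 400,
# 800 px focal length, 0.12 m baseline, principal point (640, 360).
x, y, z = triangulate_point(700, 650, 400, 800.0, 0.12, 640, 360)
print(round(z, 3))  # prints 1.92 (depth in metres)
```

Once the code's 3D position is known, the robot arm can be commanded to bring the code reader within its working distance of that point.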