Abstract
Many applications of wheeled mobile robots demand a reliable solution to the autonomous mobility problem, i.e., navigation over large displacements. A promising approach to this problem is the following of a visual path extracted from a visual memory. In this paper, we propose an image-based control scheme for driving wheeled mobile robots along visual paths. Our approach is based on feedback of the information given by geometric constraints: the epipolar geometry or the trifocal tensor. The proposed control law requires only one measurement, easily computed from the image data through the geometric constraint. The approach has two main advantages: an explicit decomposition of the pose parameters is not required, and the rotational velocity is smooth, or at worst piece-wise constant, avoiding the discontinuities that generally appear in previous works when the target image changes. The translational velocity is adapted as demanded by the path, and the resulting motion is independent of this velocity. Furthermore, our approach is valid for any camera with approximately central projection, including conventional, catadioptric and some fisheye cameras. Simulations and real-world experiments illustrate the validity of the proposal.
Acknowledgments
This work was supported by projects DPI 2009-08126 and DPI 2012-32100 and Grants of Banco Santander-Universidad de Zaragoza and Conacyt-México.
Appendix: interaction between visual measurements and robot velocities
The derivation of expressions (9) and (10) is presented next. These expressions capture how the rate of change of the visual measurements depends on the robot velocities. Thus, the time derivative of the x-coordinate of the current epipole (4), after simplification, is given by:
Using the kinematic model of the camera-robot (1), we have:
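The camera-robot model (1) is not reproduced in this excerpt. As a point of reference only, a standard unicycle kinematic model for a camera mounted on a differential-drive robot (an assumption; the paper's model (1) may use different axis conventions) reads:

```latex
\dot{x} = -v \sin\phi, \qquad
\dot{y} = v \cos\phi, \qquad
\dot{\phi} = \omega,
```

where \(v\) and \(\omega\) denote the translational and rotational velocities of the robot.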
and using the polar coordinates (5) and some algebra:
Finally, applying trigonometric identities yields the interaction relationship (9):
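The derivation above assumes the current epipole is available from the estimated epipolar geometry. As an illustrative sketch (not the authors' implementation, and with the example matrix below chosen purely for demonstration), the two epipoles can be recovered as the null vectors of a fundamental matrix \(F\) via its SVD:

```python
import numpy as np

def epipoles_from_fundamental(F):
    """Recover the two epipoles of a fundamental matrix F.

    F has rank 2; the right epipole e satisfies F e = 0 and the left
    epipole e' satisfies F^T e' = 0. Both are read off the SVD of F
    and scaled to homogeneous form (last entry equal to 1).
    """
    U, _, Vt = np.linalg.svd(F)
    e = Vt[-1]          # right null vector: epipole in the first image
    e_prime = U[:, -1]  # left null vector: epipole in the second image
    return e / e[2], e_prime / e_prime[2]

# Example: F = [t]_x for identity intrinsics and pure translation
# t = (1, 0, 1); both epipoles then project to (1, 0, 1).
F = np.array([[0.0, -1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0,  1.0,  0.0]])
e, e_prime = epipoles_from_fundamental(F)
```

In practice \(F\) would be estimated from point matches (e.g., with a robust eight-point algorithm), and only the x-coordinate of the normalized epipole is fed to the control law.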
A similar procedure is followed to obtain the time derivative of \(T_{221}\) according to (8), using the camera-robot model (1):
The expression in parentheses corresponds to the relative position between \(\mathbf{C}_{2}\) and \(\mathbf{C}_{3}\), i.e., \(t_{y_{2}}=T_{223}^{m},\) so that:
Finally, dividing both sides of the equation by the constant element \(T_{232}\) yields the normalized expression (10):
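The normalization step can be summarized as follows (a sketch only; the superscript \(n\) is notation assumed here, and the full right-hand side of (10) is not reproduced in this excerpt). Since \(T_{232}\) is constant, dividing the dynamics through by it preserves the structure of the interaction:

```latex
T_{221}^{n} \triangleq \frac{T_{221}}{T_{232}}
\quad\Longrightarrow\quad
\dot{T}_{221}^{n} = \frac{\dot{T}_{221}}{T_{232}}.
```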
Cite this article
Becerra, H.M., Sagüés, C., Mezouar, Y. et al. Visual navigation of wheeled mobile robots using direct feedback of a geometric constraint. Auton Robot 37, 137–156 (2014). https://doi.org/10.1007/s10514-014-9382-3