Visual navigation of wheeled mobile robots using direct feedback of a geometric constraint

Autonomous Robots

Abstract

Many applications of wheeled mobile robots demand a good solution to the autonomous mobility problem, i.e., navigation over large displacements. A promising approach to this problem is to follow a visual path extracted from a visual memory. In this paper, we propose an image-based control scheme for driving wheeled mobile robots along visual paths. Our approach is based on the feedback of information given by a geometric constraint: the epipolar geometry or the trifocal tensor. The proposed control law requires only one measurement, which is easily computed from the image data through the geometric constraint. The approach has two main advantages: explicit decomposition of the pose parameters is not required, and the rotational velocity is smooth, or at most piecewise constant, avoiding the discontinuities that generally appear in previous works when the target image changes. The translational velocity is adapted as demanded by the path, and the resulting motion is independent of this velocity. Furthermore, our approach is valid for any camera with approximately central projection, including conventional, catadioptric, and some fisheye cameras. Simulations and real-world experiments illustrate the validity of the proposal.


Acknowledgments

This work was supported by projects DPI 2009-08126 and DPI 2012-32100 and by grants from Banco Santander-Universidad de Zaragoza and Conacyt-México.

Author information

Correspondence to Héctor M. Becerra.

Appendix: Interaction between visual measurements and robot velocities

The derivation of expressions (9) and (10), which capture the dependence of the rate of change of the visual measurements on the robot velocities, is presented next. The time derivative of the x-coordinate of the current epipole (4), after simplification, is given by:

$$\begin{aligned} \dot{e}_{cx}=\alpha _{x}\frac{\dot{x}y-x\dot{y}+\dot{\phi }(x^{2}+y^{2})}{ (y\cos \phi -x\sin \phi )^{2}}. \end{aligned}$$

Using the kinematic model of the camera-robot (1), we have:

$$\begin{aligned} \dot{e}_{cx}=\alpha _{x}\frac{-y\upsilon \sin \phi -x\upsilon \cos \phi +\omega (x^{2}+y^{2})}{(y\cos \phi -x\sin \phi )^{2}}, \end{aligned}$$

and, introducing the polar coordinates (5) and some algebra:

$$\begin{aligned} \dot{e}_{cx}=\alpha _{x}\frac{\upsilon (-\sin \phi \cos \psi +\cos \phi \sin \psi )/d+\omega }{(\cos \phi \cos \psi +\sin \phi \sin \psi )^{2}}. \end{aligned}$$

Finally, applying the identities \(\sin \phi \cos \psi -\cos \phi \sin \psi =\sin (\phi -\psi )\) and \(\cos \phi \cos \psi +\sin \phi \sin \psi =\cos (\phi -\psi )\) yields the interaction relationship (9):

$$\begin{aligned} \dot{e}_{cx}=-\frac{\alpha _{x}\sin \left( \phi -\psi \right) }{d\cos ^{2}\left( \phi -\psi \right) }\upsilon +\frac{\alpha _{x}}{\cos ^{2}\left( \phi -\psi \right) }\omega . \end{aligned}$$
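These algebraic steps can be verified symbolically. The following minimal sketch (SymPy) assumes that the current epipole (4) takes the form \(e_{cx}=\alpha _{x}(x\cos \phi +y\sin \phi )/(y\cos \phi -x\sin \phi )\) and that the camera-robot model (1) is the unicycle \(\dot{x}=-\upsilon \sin \phi \), \(\dot{y}=\upsilon \cos \phi \), \(\dot{\phi }=\omega \); both assumptions are reverse-engineered from the intermediate expressions above and should be checked against the definitions in the body of the paper.

```python
import sympy as sp

t, v, w, ax, d, psi, ph = sp.symbols('t v w alpha_x d psi ph', real=True)
x, y, phi = (sp.Function(n)(t) for n in ('x', 'y', 'phi'))

# Assumed form of the current epipole (4), consistent with the appendix.
e_cx = ax * (x*sp.cos(phi) + y*sp.sin(phi)) / (y*sp.cos(phi) - x*sp.sin(phi))

# Differentiate in time and inject the assumed unicycle model (1).
de = sp.diff(e_cx, t).subs({
    sp.Derivative(x, t): -v*sp.sin(phi),   # x_dot = -v*sin(phi)
    sp.Derivative(y, t):  v*sp.cos(phi),   # y_dot =  v*cos(phi)
    sp.Derivative(phi, t): w,              # phi_dot = omega
})

# Polar coordinates (5): x = -d*sin(psi), y = d*cos(psi); freeze phi(t) to a symbol.
de = de.subs({x: -d*sp.sin(psi), y: d*sp.cos(psi), phi: ph})

# Closed form (9): the claimed interaction relationship.
target = -ax*sp.sin(ph - psi)/(d*sp.cos(ph - psi)**2)*v + ax/sp.cos(ph - psi)**2*w

print(sp.simplify(sp.expand_trig(de - target)))  # expected output: 0
```

If both assumptions match the paper's definitions, the printed difference is zero, confirming that the \(\upsilon \)-dependent terms collapse into the single \(\sin (\phi -\psi )\) factor of (9).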

A similar procedure is followed to obtain the time derivative of \(T_{221}\) according to (8), using the camera-robot model (1) and the shorthand \(c\phi _{2}=\cos \phi _{2}\), \(s\phi _{2}=\sin \phi _{2}\):

$$\begin{aligned} \dot{T}_{221}^{m}&= -\dot{x}_{2}c\phi _{2}+x_{2}\dot{\phi }_{2}s\phi _{2}-\dot{y}_{2}s\phi _{2}-y_{2}\dot{\phi }_{2}c\phi _{2}, \\ \dot{T}_{221}^{m}&= \upsilon s\phi _{2}c\phi _{2}+x_{2}\omega s\phi _{2}-\upsilon s\phi _{2}c\phi _{2}-y_{2}\omega c\phi _{2}, \\&= \left( x_{2}s\phi _{2}-y_{2}c\phi _{2}\right) \omega . \end{aligned}$$

The expression in parentheses corresponds to the relative position between \(\mathbf{C}_{2}\) and \(\mathbf{C}_{3}\), i.e., \(t_{y_{2}}=T_{223}^{m}\), so that:

$$\begin{aligned} \dot{T}_{221}^{m}=T_{223}^{m}\omega . \end{aligned}$$

Finally, dividing both sides of the equation by the constant element \(T_{232}\) yields the normalized expression (10):

$$\begin{aligned} \dot{T}_{221}=T_{223}\omega . \end{aligned}$$
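The trifocal case admits the same kind of symbolic check. The sketch below assumes, per the structure of (8) used above, \(T_{221}^{m}=-x_{2}\cos \phi _{2}-y_{2}\sin \phi _{2}\) and \(T_{223}^{m}=x_{2}\sin \phi _{2}-y_{2}\cos \phi _{2}\), with \((x_{2},y_{2},\phi _{2})\) obeying the same unicycle kinematics (1); again, these element definitions are assumptions to be checked against the paper.

```python
import sympy as sp

t, v, w = sp.symbols('t v w', real=True)
x2, y2, p2 = (sp.Function(n)(t) for n in ('x2', 'y2', 'phi2'))

# Assumed tensor elements, per the structure of (8) used in the appendix.
T221m = -x2*sp.cos(p2) - y2*sp.sin(p2)
T223m =  x2*sp.sin(p2) - y2*sp.cos(p2)

# Unicycle kinematics (1) for the second view.
kin = {sp.Derivative(x2, t): -v*sp.sin(p2),
       sp.Derivative(y2, t):  v*sp.cos(p2),
       sp.Derivative(p2, t):  w}

dT = sp.diff(T221m, t).subs(kin)
print(sp.simplify(dT - T223m*w))  # expected output: 0 (the v-terms cancel exactly)
```

Note that \(\upsilon \) cancels identically in the derivative, which is precisely why \(\dot{T}_{221}\) depends only on \(\omega \).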

About this article

Cite this article

Becerra, H.M., Sagüés, C., Mezouar, Y. et al. Visual navigation of wheeled mobile robots using direct feedback of a geometric constraint. Auton Robot 37, 137–156 (2014). https://doi.org/10.1007/s10514-014-9382-3


Keywords

Navigation