Abstract
This paper presents a visual homing method for a robot moving on the ground plane. The approach employs a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment, together with the current image taken by the robot. As a first contribution, we present a method to obtain the relative angles between all these locations by computing the 1D trifocal tensor between views and applying an indirect angle estimation procedure. The tensor is particularly well suited to planar motion and lends important robustness properties to our technique. A second contribution is a new control law that uses the available angles, with no range information involved, to drive the robot to the goal. Our method thus exploits the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. We present a formal proof of the stability of the proposed control law, and illustrate the performance of our approach through simulations and several sets of experiments with real images.
References
Aranda, M., López-Nicolás, G., & Sagüés, C. (2010). Omnidirectional visual homing using the 1D trifocal tensor. In IEEE international conference on robotics and automation (pp. 2444–2450).
Argyros, A. A., Bekris, K. E., Orphanoudakis, S. C., & Kavraki, L. E. (2005). Robot homing by exploiting panoramic vision. Autonomous Robots, 19(1), 7–25.
Åström, K., & Oskarsson, M. (2000). Solutions and ambiguities of the structure and motion problem for 1D retinal vision. Journal of Mathematical Imaging and Vision, 12(2), 121–135.
Basri, R., Rivlin, E., & Shimshoni, I. (1999). Visual homing: Surfing on the epipoles. International Journal of Computer Vision, 33(2), 117–137.
Becerra, H., López-Nicolás, G., & Sagüés, C. (2010). Omnidirectional visual control of mobile robots based on the 1D trifocal tensor. Robotics and Autonomous Systems, 58(6), 796–808.
Booij, O., Terwijn, B., Zivkovic, Z., & Kröse, B. (2007). Navigation using an appearance based topological map. In IEEE international conference on robotics and automation (pp. 3927–3932).
Booij, O., Zivkovic, Z., & Kröse, B. (2006). Sparse appearance based modeling for robot localization. In IEEE international conference on intelligent robots and systems (pp. 1510–1515).
Chaumette, F., & Hutchinson, S. (2006). Visual servo control, part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4), 82–90.
Chen, J., Dixon, W., Dawson, M., & McIntyre, M. (2006). Homography-based visual servo tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 22(2), 407–416.
Cherubini, A., & Chaumette, F. (2011). Visual navigation with obstacle avoidance. In IEEE/RSJ international conference on intelligent robots and systems (pp. 1593–1598).
Chesi, G., & Hashimoto, K. (2004). A simple technique for improving camera displacement estimation in eye-in-hand visual servoing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9), 1239–1242.
Chesi, G., & Hashimoto, K. (Eds.). (2010). Visual servoing via advanced numerical methods. In Lecture notes in control and information sciences (Vol. 401). New York: Springer.
Churchill, D., & Vardy, A. (2008). Homing in scale space. In IEEE/RSJ international conference on intelligent robots and systems (pp. 1307–1312).
Courbon, J., Mezouar, Y., & Martinet, P. (2008). Indoor navigation of a non-holonomic mobile robot using a visual memory. Autonomous Robots, 25(3), 253–266.
Dellaert, F., & Stroupe, A. W. (2002). Linear 2D localization and mapping for single and multiple robot scenarios. In IEEE international conference on robotics and automation (pp. 688–694).
DeSouza, G. N., & Kak, A. C. (2002). Vision for mobile robot navigation: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2), 237–267.
Franz, M. O., Schölkopf, B., Georg, P., Mallot, H. A., & Bülthoff, H. H. (1998). Learning view graphs for robot navigation. Autonomous Robots, 5(1), 111–125.
Goedemé, T., Nuttin, M., Tuytelaars, T., & Van Gool, L. (2007). Omnidirectional vision based topological navigation. International Journal of Computer Vision, 74(3), 219–236.
Guerrero, J. J., Murillo, A. C., & Sagüés, C. (2008). Localization and matching using the planar trifocal tensor with bearing-only data. IEEE Transactions on Robotics, 24(2), 494–501.
Hartley, R. I., & Zisserman, A. (2004). Multiple view geometry in computer vision (2nd ed.). Cambridge: Cambridge University Press.
Hong, J., Tan, X., Pinette, B., Weiss, R., & Riseman, E. M. (1992). Image-based homing. IEEE Control Systems Magazine, 12(1), 38–45.
Khalil, H. K. (2001). Nonlinear systems (3rd ed.). New York: Prentice Hall.
Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R., & Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems, 30(1–2), 39–64.
Lim, J., & Barnes, N. (2009). Robust visual homing with landmark angles. In Proceedings of robotics: Science and systems, Seattle.
López-Nicolás, G., Guerrero, J. J., & Sagüés, C. (2010a). Multiple homographies with omnidirectional vision for robot homing. Robotics and Autonomous Systems, 58(6), 773–783.
López-Nicolás, G., Guerrero, J. J., & Sagüés, C. (2010b). Visual control through the trifocal tensor for nonholonomic robots. Robotics and Autonomous Systems, 58(2), 216–226.
López-Nicolás, G., & Sagüés, C. (2011). Vision-based exponential stabilization of mobile robots. Autonomous Robots, 30(3), 293–306.
López-Nicolás, G., Sagüés, C., Guerrero, J., Kragic, D., & Jensfelt, P. (2008). Switching visual control based on epipoles for mobile robots. Robotics and Autonomous Systems, 56(7), 592–603.
Lowe, D. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
Möller, R., Vardy, A., Kreft, S., & Ruwisch, S. (2007). Visual homing in environments with anisotropic landmark distribution. Autonomous Robots, 23(3), 231–245.
Quan, L. (2001). Two-way ambiguity in 2D projective reconstruction from three uncalibrated 1D images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 212–216.
Shademan, A., & Jägersand, M. (2010). Three-view uncalibrated visual servoing. In IEEE international conference on intelligent robots and systems (pp. 6234–6239).
Shashua, A., & Werman, M. (1995). Trilinearity of three perspective views and its associated tensor. In International conference on computer vision (pp. 920–925).
Slotine, J.-J. E., & Li, W. (1991). Applied nonlinear control. New York: Prentice Hall.
Stürzl, W., & Mallot, H. A. (2006). Efficient visual homing based on Fourier transformed panoramic images. Robotics and Autonomous Systems, 54(4), 300–313.
Weber, K., Venkatesh, S., & Srinivasan, M. V. (1998). Insect-inspired robotic homing. Adaptive Behavior, 7(1), 65–97.
Zivkovic, Z., Bakker, B., & Kröse, B. (2006). Hierarchical map building and planning based on graph partitioning. In IEEE international conference on robotics and automation (pp. 803–809).
Zivkovic, Z., Booij, O., & Kröse, B. (2007). From images to rooms. Robotics and Autonomous Systems, 55(5), 411–418.
Acknowledgments
This work was supported by Ministerio de Ciencia e Innovación/European Union (projects DPI2009-08126 and DPI2012-32100), by Ministerio de Educación under FPU grant AP2009-3430, and by DGA-FSE (group T04).
Appendix
1.1 Global asymptotic stability
Theorem 1
The system under the proposed control law (11), (12) is globally asymptotically stable if \(k_{\omega }>k_{v}\cdot \pi /d_{min}.\)
Proof
We will use Lyapunov techniques (Khalil 2001) to analyze the stability of the system. We define the following positive definite candidate Lyapunov function:
where \(\rho \) is the distance between the current and goal positions, and \(\mathbf{x}\) is the state of the system, determined by \(\rho \) and \(\alpha _{CG}.\) The two state variables we use are a suitable choice, since we are only interested in reaching the goal position, regardless of the final orientation of the robot. As can be seen, both \(V\) and \(\dot{V}\) are continuous functions.
We note at this moment that the equilibrium in our system occurs at the two following points: \((\rho ,\alpha _{CG})=(0,0)\) and \((\rho ,\alpha _{CG})=(0,\pi ),\) which correspond to the situations where the robot reaches the goal moving forwards or backwards, respectively. In order to account for the multiple equilibria, in the following we use the global invariant set theorem (Slotine and Li 1991) to prove the asymptotic stability of the system.
What we need to show is that \(V\) is radially unbounded and \(\dot{V}\) is negative semi-definite over the whole state space. It is straightforward that \(V(\mathbf{x})\) is radially unbounded, given that \(V(\mathbf{x})\rightarrow \infty \) as \(\Vert \mathbf{x}\Vert \rightarrow \infty .\) Next, we show that the derivative \(\dot{V}(\mathbf{x})\) is negative semi-definite. For our chosen candidate Lyapunov function, this derivative is as follows:
We will suppose that the vehicle on which the control method is to be implemented is a nonholonomic unicycle platform. The dynamics of the system as a function of the input velocities is then given, using the derivatives in polar coordinates with the origin at the goal, by \(\dot{\rho }=-v \cos (\alpha _{CG})\) and \(\dot{\alpha }_{CG}=-\omega + v \sin (\alpha _{CG})/\rho .\) Using the control velocities (11), (12) we obtain
By definition \(\rho \ge 0\) and \(S_i \ge 0.\) It is then straightforward to see that the first and the second term of (15) are negative definite. However, the third term can be positive. The interpretation is that, for the system to be stable, the convergence speed provided by the angular velocity must be higher than the convergence speed given by the linear velocity. Otherwise, the angular error is not corrected fast enough and the robot spirals around the goal. Still, stability can be guaranteed if the control gains are selected properly. From (15), we can see that \(\dot{V}<0\) is guaranteed if the following inequality holds:
This is equivalent to the following condition on the angular velocity gain:
We aim to find an upper bound for the right-hand side of (17). We start by analyzing the first fraction. Since \(\alpha _{CG}^d\) equals either \(0\) or \(\pi ,\) and \(\sin (\alpha _{CG})=-\sin (\alpha _{CG}-\pi ),\) we have:
as \(\sin (\alpha _{CG})/\alpha _{CG}\) is a sinc function, whose maximum absolute value occurs at \(\alpha _{CG}=0\) and equals 1. We now look for a bound on the \(S_{i}/\rho \) term in (17). The angular sector \(S_{i}\) seen from reference view \(i\) lies in the interval \(0 \le S_{i} \le \pi .\) We study two subintervals separately:
-
\(0 \le S_{i} \le \pi /2.\) Applying the law of sines on the triangle defined by vertices \(C,\,G\) and \(R_{i}\) in Fig. 5, the addend in (17) corresponding to reference view \(i\) becomes:
$$\begin{aligned} \frac{S_{i}}{\rho }=\frac{S_{i}}{\sin (S_{i})}\cdot \frac{\sin (\widehat{CR_{i}G})}{d_{i}} \le \frac{\pi }{2\cdot d_{min}} \end{aligned}$$
(19)
The first fraction of the product in (19) is a function of \(S_{i}\) whose value equals 1 at \(S_{i}=0\) and increases monotonically to \(\pi /2\) at \(S_{i}=\pi /2,\) which is the limit of the interval we are considering. Since the second fraction is upper-bounded by \(1/d_{min},\) the product of the two is upper-bounded by \(\pi /(2\cdot d_{min}).\)
-
\(\pi /2 < S_{i} \le \pi .\) In this case, \(\rho >d_{i},\) and an upper bound is readily found for the addend in (17) corresponding to reference view \(i\):
$$\begin{aligned} \frac{S_{i}}{\rho } \le \frac{\pi }{d_{min}}. \end{aligned}$$
(20)
Thus, the contribution of each of the reference views to the sum is upper-bounded by the higher of the two bounds in (19) and (20), which is \(\frac{\pi }{d_{min}}.\) The mean of all the individual contributions is therefore bounded by this value, i.e.:
and inequality (17) becomes:
\(\square \)
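The two bounds manipulated in this proof (the sinc bound and the bound on \(S_{i}/\rho \) for \(S_{i}\le \pi /2\)) can be checked numerically. The following is an illustrative sketch, not part of the original argument; the value chosen for \(d_{min}\) is arbitrary:

```python
import math

# Bound on the sinc term: |sin(a)/a| <= 1 for all a != 0.
angles = [k * math.pi / 500 for k in range(1, 501)]          # samples in (0, pi]
assert all(abs(math.sin(a) / a) <= 1.0 for a in angles)

# Case 0 < S_i <= pi/2: S_i/sin(S_i) increases monotonically
# from 1 (as S_i -> 0) to pi/2 (at S_i = pi/2).
half = [k * (math.pi / 2) / 500 for k in range(1, 501)]      # samples in (0, pi/2]
ratios = [s / math.sin(s) for s in half]
assert all(ratios[k] <= ratios[k + 1] for k in range(499))   # monotone increase
assert abs(ratios[-1] - math.pi / 2) < 1e-9                  # supremum is pi/2

# With sin(CR_iG) <= 1 and d_i >= d_min, each addend S_i/rho is
# therefore bounded by pi/(2*d_min) on this subinterval.
d_min = 0.5                                                  # arbitrary example value
bound = max(ratios) / d_min
assert bound <= math.pi / (2 * d_min) + 1e-9
```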
1.2 Local exponential stability
Proposition 1
The system under the proposed control law (11), (12) is locally exponentially stable.
Proof
We analyze the behavior of the system locally, i.e. assuming the orientation of the robot has already been corrected (\(\alpha _{CG}=\alpha _{CG}^d\)). The dynamics of the distance from the goal for the unicycle vehicle considered is then given by:
Now, taking into account that \(S_{i}\ge \sin S_{i}\) over the whole interval of possible values (\(0\le S_{i}\le \pi \)), we have:
It can be readily seen, looking at Fig. 5, that for any given current position \(C\) of the robot, \(\sin (\widehat{CR_{i}G})\) will be greater than zero for at least one \(R_{i}\) as long as there are at least three reference views (including the goal) and their locations are not collinear. Thus, there exists a positive value \(\lambda _{min}\) such that
From (24) and (25) it can be concluded that the local convergence to the target state is bounded by an exponential decay, i.e. the system is locally exponentially stable. \(\square \)
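The local exponential convergence can be illustrated with a small simulation. Since control law (11) is not reproduced in this appendix, this sketch assumes, for illustration only, a linear velocity proportional to the mean of the subtended angles \(S_{i}\); the reference-view positions, gain, and starting point are arbitrary choices, and the robot heading is taken as already aligned with the goal:

```python
import math

def subtended_angle(r, c, g):
    """Angle at reference view r between the directions to c and to g."""
    ax, ay = c[0] - r[0], c[1] - r[1]
    bx, by = g[0] - r[0], g[1] - r[1]
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx
    return abs(math.atan2(cross, dot))   # value in [0, pi]

goal = (0.0, 0.0)
refs = [(0.0, 1.0), (1.0, 1.0), (-1.0, 0.5)]   # non-collinear reference views
k_v, dt = 1.0, 0.01
c = (2.0, 0.0)                                  # start on the line to the goal
trace = [c[0]]                                  # c[0] equals rho along this line
for _ in range(2000):                           # 20 s of simulated time
    v = k_v * sum(subtended_angle(r, c, goal) for r in refs) / len(refs)
    c = (c[0] - v * dt, c[1])                   # heading aligned with the goal
    trace.append(c[0])

# rho decreases monotonically and converges exponentially fast near the goal
assert all(trace[k + 1] <= trace[k] for k in range(len(trace) - 1))
assert trace[-1] < 1e-2 * trace[0]
```

Because the reference views are non-collinear, at least one \(S_{i}\) stays proportional to \(\rho \) near the goal, which is what produces the exponential envelope observed in the simulation.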
Aranda, M., López-Nicolás, G. & Sagüés, C. Angle-based homing from a reference image set using the 1D trifocal tensor. Auton Robot 34, 73–91 (2013). https://doi.org/10.1007/s10514-012-9313-0