
Robotics and Autonomous Systems

Volume 57, Issue 9, 30 September 2009, Pages 931-942

Consistent triangulation for mobile robot localization using discontinuous angular measurements

https://doi.org/10.1016/j.robot.2009.06.001

Abstract

Localization is a fundamental operation for the navigation of mobile robots. Standard localization algorithms fuse external measurements of the environment with the odometric evolution of the robot pose to obtain an optimal estimate of that pose. In this work, we present a different approach that determines the pose using angular measurements obtained discontinuously in time. The presented method is based on an Extended Kalman Filter (EKF) with a state-vector composed of the external angular measurements. Between actual measurements, the algorithm keeps track of these angles using the robot odometric information. This continuous angular estimation allows triangulation methods to be applied consistently to determine the robot pose at any time during the motion. The article reports experimental results that show the localization accuracy obtained with the presented approach. These results are compared to the ones obtained by applying the EKF with the standard pose state-vector. For the experiments, a robotic platform with omnidirectional wheels is used.

Introduction

Localization of mobile robots, i.e., the process of estimating the robot pose (position and orientation) within the working environment, is a fundamental operation for autonomous robot navigation. Assuming that the motion of the robot frame is planar, which holds for most indoor applications (e.g., manufacturing industry, hospitals), the robot pose can be defined by the position coordinates x(t) and y(t) of a robot point and the absolute orientation angle ψ(t). In recent decades, a number of different approaches to solve the robot localization problem have been proposed. Borenstein et al. [1] classified these methods into two general groups: relative localization and absolute localization.

In relative localization, internal measurements are used to determine the evolution of the robot pose by time integration of the kinematic equations. The two main methods in this field are inertial navigation [2], [3], in which the measured data are accelerations of robot points and angular velocities, and odometry [4], in which the measured data are wheel rotation angles. Odometry, because of its low cost and high update rate, is a widely used technique. However, its time-integration errors grow without bound with the distance traveled by the robot, which is a significant drawback [5]. The propagation of odometry errors is an important research topic in mobile robotics, and many methods have been proposed either to calibrate the systematic errors [6], [7] or to model the non-systematic random errors [8].
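As an illustration of how odometric dead reckoning accumulates the pose by time integration, the sketch below integrates a differential-drive model from wheel increments. This is a simplified stand-in, not the omnidirectional kinematics used in this work (see Section 5); the track width and wheel increments are arbitrary illustrative values.

```python
import math

def dead_reckon(pose, d_left, d_right, track_width):
    """Advance a planar pose (x, y, psi) from differential-drive wheel
    increments by time integration of the kinematic equations."""
    x, y, psi = pose
    d_center = 0.5 * (d_left + d_right)       # distance traveled by the midpoint
    d_psi = (d_right - d_left) / track_width  # heading change over the step
    # Integrate using the heading at the midpoint of the step
    x += d_center * math.cos(psi + 0.5 * d_psi)
    y += d_center * math.sin(psi + 0.5 * d_psi)
    return (x, y, psi + d_psi)

# Driving straight: equal wheel increments leave the heading unchanged,
# but any encoder error would accumulate without bound in (x, y, psi).
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(pose, 0.01, 0.01, track_width=0.4)
```

Any systematic error in `d_left`, `d_right` or `track_width` enters this integration at every step, which is exactly why the odometric error grows with the distance traveled.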

In absolute localization, the pose is determined from its geometric relationship with external measurements of the robot workspace. The sensors used (e.g., laser scanners, ultrasound sensors or cameras) provide angular or distance measurements, relative to the robot, associated with distinct features of the environment. There are several methods to cope with absolute localization. The first group (geometric methods) applies when the measurements correspond to the same robot pose. In that case, the robot pose can be obtained straightforwardly from the measured data and their geometric relationship with the pose. This is the case of triangulation, which determines the robot pose from angular measurements associated with a set of known points or landmarks [9], [10], [11], [12], [13], [14], [15], and of trilateration, which is used when the measured data are distances to known points [16].

Nevertheless, when the measurements are obtained discontinuously in time and the robot moves, they do not correspond to the same robot pose, and therefore geometric methods alone cannot be consistently applied [17]. To solve this problem, most methods combine a prediction phase, based on odometry, with a correction phase that uses external measurements to correct the odometry error and obtain the optimal estimate of the pose. The most widely accepted tool to perform this data fusion is the Extended Kalman Filter (EKF) [18], [19], [20], [21]. Some remarkable work using the EKF for localization purposes can be found in [22], [23], [24]. Other approaches used to fuse odometry with external measurements are the so-called probabilistic methods [25], [26], which are robust in complex and badly mapped environments, and methods that use interval analysis to deal with bounded-error pose estimation [27].
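The predict/correct structure of this data fusion can be illustrated with a deliberately minimal scalar Kalman filter (a one-dimensional linear case rather than the full EKF of the article; all noise values are arbitrary assumptions): odometric increments drive the prediction, and sparse external measurements correct the accumulated drift.

```python
def kf_predict(x, P, u, Q):
    """Prediction: propagate the state with the odometric increment u;
    the variance P grows by the process noise Q."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Correction: fuse an external measurement z with variance R."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1e-4
true_pos = 0.0
for step in range(50):
    true_pos += 0.1
    x, P = kf_predict(x, P, 0.1 + 0.002, Q=1e-4)  # odometry with a small bias
    if step % 10 == 9:                            # sparse external measurement
        x, P = kf_update(x, P, z=true_pos, R=1e-6)
# Without the corrections the bias would accumulate to 0.1 over 50 steps;
# with them the final error stays far below that.
```

The same two-phase loop underlies the vector EKF used throughout the article; only the state, the models and the dimensions change.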

Other concepts that are becoming important in the mobile robotics literature are SLAM, which stands for “Simultaneous Localization And Mapping”, and qualitative navigation. In qualitative navigation, the need for distance information and metrically accurate maps is avoided [28]. SLAM methods do not require any previous knowledge of the environment and are therefore useful for applications in which the robot workspace may change [29]. This high flexibility is, however, counterbalanced by a lower accuracy, since the map of the environment is itself estimated rather than accurately known beforehand.

The aim of this article is to improve the accuracy of the standard methods, which assume that the map of the environment and the positions of the landmarks are accurately known. It is important to point out that high accuracy is usually required in industrial applications, e.g., in order to avoid collisions, and that in those applications the robot environment typically does not change over time.

In this article, a novel technique to estimate the robot pose using odometry and discontinuous angular measurements is presented. The angles are assumed to be measured by a laser localization system, which consists of a rotating laser scanner and a set of catadioptric landmarks. The sensor emits a laser beam that horizontally sweeps the environment and is reflected back when it detects a landmark. The angle of the reflected beam relative to the robot frame is then measured. Since landmarks are detected from different poses during the robot motion, triangulation cannot be applied consistently to the raw sensor measurements. A new definition of the EKF state-space is proposed to enable the use of triangulation: a state-vector composed of the external angular measurements (“angular state-vector”) is introduced instead of the standard state-vector composed of the robot position and orientation (“pose state-vector”).

The filter fuses the angular odometric evolution with each actual measurement to obtain, at any time, the optimal estimation of the defined state-vector. Then, since the estimated angles correspond to the same robot pose, triangulation can be consistently applied to determine the robot pose during its motion [17], [30]. The proposed angular state-vector greatly simplifies the measurement equation of the EKF, because its components are directly measured by the sensor. The linearity of this equation improves the convergence of the filter compared to the standard use of the EKF, which presents a nonlinear measurement model.

The article is structured as follows. Sections 2 and 3 focus on the triangulation algorithm and its error propagation, respectively. Section 4 presents the proposed angular-state EKF, which enables the consistent use of triangulation for a general robot kinematics. Section 5 presents the kinematic model of the robot prototype used in the experiments; this robot has three omnidirectional wheels, actuated by three motors that control the 3 DOF of its planar motion. Section 6 shows the results of experiments that test the accuracy of the method. Finally, Section 7 presents the main conclusions of this work.

Section snippets

Geometric localization using triangulation

Triangulation is the operation used in mobile robotics to determine the position and the orientation of the robot from angular measurements, relative to the robot, of the lines connecting one of its points and a minimum of three landmarks located at known sites. When a set of more than three landmarks is used, the geometrical problem is over-determined and optimization techniques must be used in order to find the solution minimizing the error. In this article, the determined triangulation
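For the minimal three-landmark case a closed-form solution exists. The sketch below uses the well-known cotangent ("three object triangulation") formulation; it is an illustration of the operation, not necessarily the exact algorithm of Eq. (A.7), and the landmark coordinates and robot pose are arbitrary test values.

```python
import math

def triangulate(beacons, alphas):
    """Closed-form position and heading from the bearings alpha_i, measured
    in the robot frame, to three landmarks at known positions
    (cotangent formulation of three-object triangulation)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a1, a2, a3 = alphas
    T12 = 1.0 / math.tan(a2 - a1)
    T23 = 1.0 / math.tan(a3 - a2)
    T31 = (1.0 - T12 * T23) / (T12 + T23)
    # Landmark coordinates relative to the middle landmark
    x1p, y1p = x1 - x2, y1 - y2
    x3p, y3p = x3 - x2, y3 - y2
    x12, y12 = x1p + T12 * y1p, y1p - T12 * x1p
    x23, y23 = x3p - T23 * y3p, y3p + T23 * x3p
    x31 = (x3p + x1p) + T31 * (y3p - y1p)
    y31 = (y3p + y1p) - T31 * (x3p - x1p)
    k31 = x1p * x3p + y1p * y3p + T31 * (x1p * y3p - x3p * y1p)
    D = (x12 - x23) * (y23 - y31) - (y12 - y23) * (x23 - x31)
    if abs(D) < 1e-12:
        raise ValueError("degenerate landmark/robot configuration")
    xr = x2 + k31 * (y12 - y23) / D
    yr = y2 + k31 * (x23 - x12) / D
    psi = math.atan2(y1 - yr, x1 - xr) - a1  # heading from one bearing
    return xr, yr, psi

# Synthetic check: generate the bearings from a known pose, then recover it
beacons = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
xr, yr, psi = 1.0, 2.0, 0.3
alphas = [math.atan2(by - yr, bx - xr) - psi for bx, by in beacons]
pos = triangulate(beacons, alphas)
```

Note that the three bearings must correspond to the same robot pose, which is precisely the consistency requirement discussed in the following sections.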

Error analysis for the robot position obtained by using triangulation

This section analyzes the propagation of the error in the input angles θi, i=1,2,3, through the triangulation equation to determine the robot position p, Eq. (A.7). Under the assumption that these measurements are corrupted by zero-mean uncorrelated random noise with a Gaussian probability density function, explicit expressions for the variance and bias error of the robot position estimation can be obtained as follows.

Let δθi and θi0 denote the additive Gaussian random error and the actual
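A first-order version of this error propagation can be sketched numerically: perturb each input angle, estimate the Jacobian J = ∂p/∂θ by central differences, and form Cov(p) ≈ J Σθ Jᵀ. The triangulation routine inlined here is the generic cotangent formulation, not Eq. (A.7) itself, and the geometry and the 0.1 mrad noise level are illustrative assumptions.

```python
import math

def triangulate_xy(beacons, alphas):
    """Position-only closed-form triangulation (cotangent formulation)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    a1, a2, a3 = alphas
    T12 = 1.0 / math.tan(a2 - a1)
    T23 = 1.0 / math.tan(a3 - a2)
    T31 = (1.0 - T12 * T23) / (T12 + T23)
    x1p, y1p, x3p, y3p = x1 - x2, y1 - y2, x3 - x2, y3 - y2
    x12, y12 = x1p + T12 * y1p, y1p - T12 * x1p
    x23, y23 = x3p - T23 * y3p, y3p + T23 * x3p
    x31 = (x3p + x1p) + T31 * (y3p - y1p)
    y31 = (y3p + y1p) - T31 * (x3p - x1p)
    k31 = x1p * x3p + y1p * y3p + T31 * (x1p * y3p - x3p * y1p)
    D = (x12 - x23) * (y23 - y31) - (y12 - y23) * (x23 - x31)
    return (x2 + k31 * (y12 - y23) / D, y2 + k31 * (x23 - x12) / D)

def position_covariance(beacons, alphas, sigma):
    """First-order propagation of i.i.d. zero-mean bearing noise (standard
    deviation sigma, in rad) to the 2x2 covariance of the position:
    Cov(p) = J * (sigma^2 I) * J^T, with J estimated numerically."""
    h = 1e-7                          # central-difference step
    J = [[0.0] * 3 for _ in range(2)]
    for i in range(3):
        ap, am = list(alphas), list(alphas)
        ap[i] += h
        am[i] -= h
        pp, pm = triangulate_xy(beacons, ap), triangulate_xy(beacons, am)
        J[0][i] = (pp[0] - pm[0]) / (2 * h)
        J[1][i] = (pp[1] - pm[1]) / (2 * h)
    s2 = sigma * sigma                # Sigma_theta = s2 * I (uncorrelated)
    return [[s2 * sum(J[r][k] * J[c][k] for k in range(3))
             for c in range(2)] for r in range(2)]

beacons = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
xr, yr, psi = 1.0, 2.0, 0.3
alphas = [math.atan2(by - yr, bx - xr) - psi for bx, by in beacons]
cov = position_covariance(beacons, alphas, sigma=1e-4)  # 0.1 mrad sensor
```

Evaluating such a covariance over the workspace reveals how the position error is diluted as the robot approaches degenerate landmark configurations.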

Dynamic angular state estimator based on Kalman filtering

This section presents the proposed algorithm to enable the consistent use of triangulation when the robot moves and angular measurements are obtained discontinuously in time. As pointed out above, in such a case the measurements correspond to different robot poses and cannot be used directly to determine the robot pose by means of triangulation [17], [30]. We propose a method to dynamically estimate the relative angles θi(t), i=1,2,3, even when there is no direct measurement
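The idea can be sketched as follows (a simplified stand-alone version with decoupled scalar variances instead of the article's full EKF covariance; the landmark positions, noise values and pose increments are illustrative assumptions): between detections, each bearing is advanced by its odometric increment, and a detection of landmark i corrects only component i through an observation z = θi + noise that is linear in the state.

```python
import math

class AngularStateFilter:
    """State = vector of bearings theta_i (robot frame) to known landmarks.
    Simplified sketch: scalar decoupled variances rather than the full
    covariance matrix of the article's EKF."""

    def __init__(self, landmarks, pose0, var0=1e-4):
        self.landmarks = landmarks
        self.pose = list(pose0)                   # dead-reckoned (x, y, psi)
        self.theta = [self._bearing(i) for i in range(len(landmarks))]
        self.var = [var0] * len(landmarks)

    def _bearing(self, i):
        lx, ly = self.landmarks[i]
        x, y, psi = self.pose
        return math.atan2(ly - y, lx - x) - psi

    def predict(self, dx, dy, dpsi, q=1e-6):
        """Advance each bearing by its odometric increment; the uncertainty
        grows between actual detections."""
        old = [self._bearing(i) for i in range(len(self.theta))]
        self.pose[0] += dx
        self.pose[1] += dy
        self.pose[2] += dpsi
        for i in range(len(self.theta)):
            self.theta[i] += self._bearing(i) - old[i]
            self.var[i] += q

    def update(self, i, z, r=1e-8):
        """Detection of landmark i: z = theta_i + noise is linear in the
        state, so the scalar Kalman update needs no linearization."""
        K = self.var[i] / (self.var[i] + r)
        self.theta[i] += K * (z - self.theta[i])
        self.var[i] *= (1.0 - K)

# All components of theta refer to the same (current) pose at every instant,
# so triangulation can be applied consistently at any time.
f = AngularStateFilter([(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)], (1.0, 2.0, 0.3))
f.predict(0.5, 0.1, 0.05)   # robot moves; no landmark detected yet
```

The linearity of the observation model is the key simplification over the pose-state EKF, whose measurement equation relates the pose to the bearings nonlinearly.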

Kinematic model of the omnidirectional robot SPHERIK-3×3

A mobile robot with omnidirectional wheels, each fitted with freely rotating spherical rollers, is considered in this work. Fig. 6 shows the robot prototype SPHERIK-3×3 equipped with the studied laser sensor. Each omnidirectional wheel is actuated by a single motor, and the three motors allow the control of the three DOF of the robot planar motion. Fig. 7a shows an omnidirectional wheel of the robot, which has a free motion perpendicular to the actuated or motorized motion.

This wheel design
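A generic inverse kinematic model for a three-omni-wheel platform can be sketched as follows; the mount angles, wheel-circle radius and symmetric layout are assumptions for illustration, not the actual SPHERIK-3×3 geometry.

```python
import math

def wheel_speeds(vx, vy, omega, L=0.2, mount_deg=(90.0, 210.0, 330.0)):
    """Map the chassis twist (vx, vy, omega), expressed in the robot frame,
    to the three driven wheel rim speeds. Wheel i sits at angle mount_deg[i]
    on a circle of radius L; its driven direction is tangential, and the
    freely rotating rollers absorb the perpendicular velocity component."""
    speeds = []
    for d_deg in mount_deg:
        d = math.radians(d_deg)
        # projection of the wheel-center velocity onto the driven direction
        speeds.append(-math.sin(d) * vx + math.cos(d) * vy + L * omega)
    return speeds

# Pure rotation: every wheel runs at the same rim speed L * omega
spin = wheel_speeds(0.0, 0.0, 1.0)
# Pure translation along x: the three tangential projections sum to zero
slide = wheel_speeds(1.0, 0.0, 0.0)
```

Because the 3×3 map from (vx, vy, ω) to the wheel speeds is invertible for this layout, the three motors fully control the three DOF of the planar motion.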

Experiments

The proposed method has been tested on the mobile robot SPHERIK-3×3 and the results have been compared to those obtained by means of the usual pose state-vector EKF [33], [34], [35], [36]. A metrological system has been designed to measure the pose of the robot with higher accuracy. This accurate measurement of the robot pose is used as a reference to determine the errors of both the presented method and the usual one.

Conclusions

In this work, an EKF-based estimation algorithm that tracks the evolution of the state-vector composed of the external angular measurements has been presented. The measurements are assumed to be discontinuously obtained in time, which is the case of the considered laser localization system. Between actual measurements, the angles are dynamically estimated, based on their odometric evolution. The goal of this algorithm is to have a linear observation model which makes the approximation errors

Acknowledgments

This work has been supported by the Reference Center for Advanced Production Techniques (CeRTAP) of the Autonomous Government of Catalonia. This support is gratefully acknowledged.

We also acknowledge the support of Jordi Munar and Antonio Benito Martínez, from the Department of Systems Engineering, Automatics and Industrial Informatics of UPC, in the design of the experimental rig.

References (42)

  • K.O. Arras et al., Multisensor on-the-fly localization: Precision and reliability for applications, Journal of Robotics and Autonomous Systems (2001)
  • J.A. Batlle et al., Holonomy in mobile robots, Robotics and Autonomous Systems (2009)
  • J. Borenstein et al., Mobile robot positioning — Sensors and techniques, Journal of Robotic Systems (1997)
  • B. Barshan et al., Inertial navigation systems for mobile robots, IEEE Transactions on Robotics and Automation (1995)
  • B. Liu, M. Adams, J. Ibañez-Guzmán, Multi-aided inertial navigation for ground vehicles in outdoor uneven environments,...
  • C.M. Wang, Location estimation and uncertainty analysis for mobile robots, in: Proceedings of the IEEE International...
  • A. Kelly, Linearized error propagation in odometry, International Journal of Robotics Research (2004)
  • J. Borenstein et al., Measurement and correction of systematic odometry errors in mobile robots, IEEE Transactions on Robotics and Automation (1996)
  • A. Martinelli et al., Simultaneous localization and odometry self calibration for mobile robot, Journal of Autonomous Robots (2007)
  • K.S. Chong, L. Kleeman, Accurate odometry and error modelling for a mobile robot, in: Proceedings of the IEEE...
  • M. Betke et al., Mobile robot localization using landmarks, IEEE Transactions on Robotics and Automation (1997)
  • K. Briechle et al., Localization of a mobile robot using relative bearing measurements, IEEE Transactions on Robotics and Automation (2004)
  • C. Cohen, F. Koss, A comprehensive study of three object triangulation, in: Proceedings of the SPIE Conference on...
  • J.M. Font, J.A. Batlle, Dynamic triangulation for mobile robot localization using an angular state Kalman filter, in:...
  • J.M. Font, J.A. Batlle, Mobile robot localization. Revisiting the triangulation methods, in: Proceedings of the IFAC...
  • A. Kelly, Precision dilution in triangulation based mobile robot position estimation, in: Proceedings of the...
  • C.D. McGillem et al., A beacon navigation method for autonomous vehicles, IEEE Transactions on Vehicular Technology (1989)
  • F. Thomas et al., Revisiting trilateration for robot localization, IEEE Transactions on Robotics and Automation (2005)
  • T. Skewis et al., Experiments with a mobile robot operating in a cluttered unknown environment, Journal of Robotic Systems (1994)
  • R.E. Kalman et al., New results in linear filtering and prediction theory, Transactions of the ASME, Journal of Basic Engineering (1961)
  • R.E. Kalman, A new approach to linear filtering and prediction problems, Transactions of the ASME, Journal of Basic Engineering (1960)

    Josep M. Font-Llagunes received his M.Eng. degrees in Mechanical Engineering (2002) and Biomedical Engineering (2004), both from the Technical University of Catalonia (UPC). In 2007, he received his Doctor of Engineering degree (with honors) from the University of Girona. Since 2004 he has been an assistant lecturer of Mechanics at UPC, where he has been involved in numerous research projects of the Reference Center for Advanced Production Techniques (CeRTAP) of the Autonomous Government of Catalonia. During the academic year 2007–2008, he was awarded with a Mobility Scholarship from UPC to pursue postdoctoral research at the Centre for Intelligent Machines (McGill University, Canada). His research interests cover mobile robot localization, simulation of multibody mechanical systems, and dynamics of bipedal locomotion.

    Joaquim A. Batlle was born in Barcelona in 1943. He was awarded his M.Eng. in Mechanical Engineering in 1968 and his Ph.D. in 1975. Upon graduation he joined the Technical University of Catalonia (UPC) where he is a full Professor of Mechanical Engineering and the head of the research line on Mechanics and Acoustics. He is a member of the Royal Academy of Sciences and Arts of Barcelona (1991), and also a member of the American Society of Mechanical Engineering (1996) and the Acoustical Society of America (1990). His research work is mainly related to the mechanics of mobile robots, to percussive dynamics and to the acoustics of waveguides and woodwinds.
