Robust estimation of walking robots velocity and tilt using proprioceptive sensors data fusion

https://doi.org/10.1016/j.robot.2014.12.012

Highlights

  • A method for velocity and tilt estimation in mobile, possibly legged, robots based on on-board sensors.

  • Robustness to inertial sensor biases and to low-quality or temporarily unavailable observations.

  • A simple framework for modeling legged robot kinematics with foot twist taken into account.

Abstract

Availability of the instantaneous velocity of a legged robot is usually required for its efficient control. However, estimating velocity solely on the basis of robot kinematics has a significant drawback: the robot is not in contact with the ground at all times, and its feet may twist. In this paper we introduce a method for velocity and tilt estimation in a walking robot. The method combines a kinematic model of the supporting leg with readouts from an inertial sensor. It can be used in any terrain, regardless of the robot's body design or the control strategy applied, and it is robust to foot twist. It is also immune to limited foot slide and temporary lack of foot contact.

Introduction

Knowledge of a robot's state, i.e., its orientation, velocity, and acceleration, is crucial to achieving good performance from most legged robots' locomotion and posture controllers, autonomous navigation systems, and path planning systems [1]. Localization systems often combine measurements from proprioceptive sensors that monitor the robot's motion with data collected by exteroceptive sensors that provide information about the surrounding environment [1], [2]. Proprioceptive sensors usually measure physical quantities such as joint position and velocity or motor torque to determine the state of the robot's body. Exteroceptive sensing techniques are applied to derive direct estimates of the robot's motion and to determine the orientation of the robot in the external world; examples include laser scan matching and vision based localization [3].

For a control task the robot should determine its state during movement, and therefore should obtain information about its position, orientation, velocity, and acceleration. While it is possible to measure acceleration and position directly and precisely [4], velocity measurement and estimation are usually more difficult [5], [6]. Furthermore, for lightweight, autonomous, legged robots the problem arises of providing a sensor suite that can estimate the full body state at a frequency sufficient for proper motor control (∼1 kHz), given the limitations on computational power and on-board instrumentation.

Sensors used for mobile robot position measurement include digital encoders, cameras [7], [8], and global positioning system (GPS) devices [9], [10], [11]. GPS measurements are usually of low accuracy and can be used only outdoors, while vision based methods require massive computations for the analysis of large images and are sensitive to lighting conditions and other disturbances. The most popular sensors are therefore digital encoders; in legged robotics, however, kinematic calculations are needed to obtain the position of the robot's center of mass. Moreover, the noise in these signals is difficult to handle and significantly affects the precision of the computations.

Velocity estimation is possible through integration of readouts from Inertial Measurement Units (IMUs) [4], [9]. An IMU is a combination of accelerometers and gyroscopes that measure acceleration and angular velocity. These measurements and their integrals are biased and affected by noise. The error in the integral grows over time, creating the so-called drift effect [12]. IMUs that are sufficiently accurate to be useful for velocity estimation are large, heavy, and expensive. The recent availability of small and inexpensive IMUs has made it possible to use them in autonomous legged robots, even though they are quite inaccurate [6].
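To make the drift effect concrete, the following sketch (not from the paper; the bias and noise values are assumptions chosen purely for illustration) integrates readouts from a biased accelerometer on a robot that is actually at rest. The velocity error grows roughly as bias × time:

```python
import numpy as np

# Illustrative sketch: naive integration of biased accelerometer readouts.
# All numeric values below are assumed for demonstration only.
rng = np.random.default_rng(0)

dt = 1e-3          # 1 kHz sampling, the rate mentioned for motor control
n = 10_000         # 10 s of data
bias = 0.05        # assumed constant accelerometer bias [m/s^2]
noise_std = 0.02   # assumed white measurement noise [m/s^2]

true_accel = np.zeros(n)  # the robot is actually at rest
measured = true_accel + bias + rng.normal(0.0, noise_std, n)

velocity = np.cumsum(measured) * dt  # naive integration of acceleration
print(f"velocity error after {n * dt:.0f} s: {velocity[-1]:.3f} m/s")
# With a 0.05 m/s^2 bias the drift is roughly bias * t = 0.5 m/s.
```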

Sensor fusion for state estimation has become very popular in the field of robotics because it significantly improves the precision of measurements. The main estimation technique for combining different information sources is the Extended Kalman Filter (EKF) [13]. Practical implementations of multiple-sensor fusion have been adopted in mobile robotics [14], underwater robotics [15], [16], underground robotics [17] and unmanned aerial vehicles [18]. There are also biomedical applications, such as motion analysis systems [19] and motor control systems for handicapped individuals [20].

It has been shown [21], [12], [8], [22], [23], [24] that joint positions, foot contact, a robot kinematic model, and IMU measurement data can be successfully applied to legged robot state estimation. Lin et al. [21] introduced a body pose estimation system for a hexapod robot: 3 DOF (Degrees of Freedom) describing center of mass (COM) translation (the so-called positioning problem) and 3 DOF describing the orientation of the body relative to a fixed inertial frame. Also in [21], the authors show that the traditional leg kinematics model and foot contact sensors can be replaced by a strain-gauge-based empirical leg configuration model. Lin et al. [6] extended that method by adding IMU measurements and an EKF. Chilian et al. [8] showed that it is possible to estimate a legged robot's state by combining drift-affected IMU measurements with additional drift-free sensors, including measurements of joint positions and torques aided by a stereovision-based system. Reinstein and Hoffmann [22] proposed an alternative method to reduce IMU bias and, moreover, to address the foot slippage problem by using period-based analysis of measurement data indicators. In [23], [25] a framework is presented for body pose estimation of a quadruped that can be viewed as a simultaneous localization and mapping (SLAM) algorithm.

Localization, positioning and navigation tasks are significantly more complex in legged robotics than in wheeled mobile robotics, mainly because the motion is described by a larger number of DOF [21], [6]. These tasks are difficult for six- or four-legged robots, and they are even more difficult for bipeds, for which the balance of the robot becomes an important issue [4].

While all of the above work concerned quadrupeds and hexapods, state estimation of a biped is a relatively new issue. Our experience demonstrates that precise velocity and attitude estimation is essential for bipedal gait control optimization with the use of reinforcement learning [26], [27]. In [28] a high-order sliding-mode observer is proposed for estimating the absolute orientation of a 5-link biped; however, such an approach requires a precise model of the biped's dynamics. In [24] a way is demonstrated to estimate humanoid trunk attitude during walking. That method is based on models developed from accelerometer and joint encoder data, combined with the IMU data using an EKF. Xinjilefu et al. [29] proposed decoupling the humanoid full-body state vector into a base state vector and a joint state vector, and then used an EKF to estimate these vectors; the decoupling allowed them to reduce the computational cost of filtering. Another approach is presented in [30], where an EKF-based estimator for body pose estimation is proposed, using the fusion of leg odometry and IMU data. In that method, the rotational constraints provided by the flat feet of the robot are incorporated into the filter.

In this paper a method is proposed for legged robot velocity and tilt estimation based on a robot kinematics model, measurement data from an IMU, digital encoders in servomotors, foot contact sensors, and an Extended Kalman Filter. The aforementioned tilt provides information about attitude without the unobservable global yaw. The method additionally estimates the biases of the inertial sensor. In the experimental study, this method was applied to a customized, inexpensive Bioloid biped robot. The proposed method can be used in any terrain, and it is independent of the robot design, the number of legs, and the walking control strategy. It is robust to foot twists, understood as rotation of a foot about its center, and allows limited foot slippage, understood as linear movement of a foot's center. A minor contribution of this paper is a modification of the standard notation introduced by Denavit and Hartenberg [31]; the modification makes it possible to handle robot kinematics easily with simple tools.

The structure of this paper is as follows: Section 2 presents the formal problem description, the experimental setup and an overview of the sensory suite. Section 3 describes the notation used throughout the paper. Basic tools for velocity estimation are presented in Section 4. Afterwards, in Section 5, sensor fusion using the EKF is described. Experimental data analysis and discussion are given in Section 6. Finally, Section 7 offers a brief summary of the results and suggestions for further work.

Section snippets

Experimental framework

Fig. 1 presents the customized Bioloid robot. The Bioloid's body has 18 identical servomotors: 6 in each leg and 3 in each arm. The robot is 35 cm tall and weighs about 2 kg. An additional box attached to the robot's back contains a small PC running Linux as well as an IMU. Each foot is equipped with 4 contact sensors.

The problem

Notation

The discussion of the robot's kinematics will be based on the IMU frame defined in the previous section and on joint frames, i.e., coordinate frames attached to the joints.

A robot's kinematics is usually discussed using the convention introduced by Denavit and Hartenberg [31]. According to this convention, points rotate only about the z-axes of appropriate frames. Defining these frames and the appropriate rotation matrices are standard operations performed by any CAD tool for robot design. However, as
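For reference, the classical Denavit–Hartenberg transform can be sketched as below. This is the standard convention only; the paper's modified notation is not reproduced here, and the link parameters in the usage example are placeholders rather than the Bioloid's actual geometry.

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform: rotate by theta
    about z, translate d along z, translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chaining the transforms of consecutive joints gives the foot pose in the
# body frame; the parameters here are placeholders, not Bioloid geometry.
T = dh_transform(0.3, 0.0, 0.05, 0.0) @ dh_transform(-0.6, 0.0, 0.07, 0.0)
foot_in_body = T @ np.array([0.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
```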

Dynamics of tilt and velocity in IMU frame

Let us consider a mobile IMU, the gravity vector, g, and the velocity of the sensor, v, both expressed in the IMU frame. Suppose that over a time period of infinitesimal length δ > 0 the angular velocity of the sensor is constant and equal to ω. Within the period, the gravity remains constant, but in the IMU frame it rotates with angular velocity ω. Hence, within the period, g changes to r(g, ωδ). Suppose that within the period the sensor moves with constant linear acceleration and finally it perceives acceleration,
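A minimal sketch of the rotation used above, under the assumption that r(g, ωδ) denotes rotating g by the axis–angle vector ωδ (Rodrigues' formula); the numeric values are illustrative only:

```python
import numpy as np

def rotate(vec: np.ndarray, axis_angle: np.ndarray) -> np.ndarray:
    """Rodrigues' formula: rotate vec by the axis-angle vector axis_angle,
    standing in here for the operator r(., .) used in the text."""
    angle = np.linalg.norm(axis_angle)
    if angle < 1e-12:
        return vec.copy()
    k = axis_angle / angle  # unit rotation axis
    return (vec * np.cos(angle)
            + np.cross(k, vec) * np.sin(angle)
            + k * np.dot(k, vec) * (1.0 - np.cos(angle)))

# Over a step of length delta, gravity expressed in the IMU frame changes
# from g to r(g, omega * delta), following the text's convention.
g = np.array([0.0, 0.0, -9.81])      # gravity in the IMU frame [m/s^2]
omega = np.array([0.0, 0.1, 0.0])    # assumed angular velocity [rad/s]
delta = 1e-3
g_next = rotate(g, omega * delta)
```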

Data fusion for tilt and velocity estimation

In this section, the tools from the previous section and the Extended Kalman Filter are combined to estimate the state of the robot's inertial sensor. In order to apply the EKF we need to define three entities: (i) the state, (ii) the model of dynamics, and (iii) the model of observation.
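For illustration, a generic EKF predict/update step corresponding to these three entities is sketched below. The functions f, h and their Jacobians F_jac, H_jac are hypothetical placeholders; the paper's specific state (tilt, velocity in the IMU frame, and inertial sensor biases) and its dynamics and observation models are not reproduced here.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One generic EKF iteration: predict with the dynamics model f,
    then correct with the observation model h. All model functions are
    placeholders supplied by the caller."""
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    y = z - h(x_pred)                    # innovation
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```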

Experimental results

To verify the proposed methodology of velocity and tilt estimation, experiments were conducted using the customized Bioloid robot described in Section 2. The robot's task was to walk along a straight line. During the experiments, the velocity estimation methods described in Sections 4 and 5 were used and compared to determine their virtues and drawbacks.

Conclusions and future work

In this paper a method was proposed for walking robot velocity and tilt estimation based on a leg kinematics model and measurement data from a low-cost Inertial Measurement Unit (IMU). The method applies an Extended Kalman Filter to perform proprioceptive sensor data fusion. In the experimental study, the method was applied to a customized Bioloid biped robot.

The proposed method can be used in any terrain because it does not make any assumptions regarding orientation of the foot while it touches


References (35)

  • P.-C. Lin et al., Sensor data fusion for body state estimation in a hexapod robot with dynamical gaits, IEEE Trans. Robot. (2006)

  • D. Van der Lijn, G.A.D. Lopes, R. Babuska, Motion estimation based on predator/prey vision, in: IEEE/RSJ International...

  • A. Chilian, H. Hirschmuller, M. Gorner, Multisensor data fusion for robust pose estimation of a six-legged walking...

  • J.Z. Sasiadek et al., Low cost automation using INS/GPS data fusion for accurate positioning, Robotica (2003)

  • B. Gassmann, F. Zacharias, J. Zollner, R. Dillmann, Localization of walking robots, in: 2005 IEEE International...

  • P.S. Maybeck

  • M. Kam et al., Sensor fusion for mobile robot navigation, Proc. IEEE (1997)

    Paweł Wawrzyński received his M.Sc. degree in computer science from Warsaw University of Technology in 2001, M.Sc. degree in economics from Warsaw University in 2004, and Ph.D. degree in computer science from Warsaw University of Technology in 2005. Since 2006 he has been working as an Assistant Professor at the Institute of Control and Computation Engineering in Warsaw, Poland. His research interests include robotics, neural networks, reinforcement learning, and cognitive science.

    Jakub Możaryn received M.Sc. degree in robotics in 2001 and Ph.D. degree in automatic control in 2011 from Warsaw University of Technology, Warsaw, Poland. Since 2010 he is a member of Cognitive Systems Group, Warsaw. Since 2011 he has been working as an Assistant Professor at the Institute of Automatic Control and Robotics, Warsaw University of Technology. His research interests include robotics, automatic control systems, neural networks and cognitive science.

    Jan Klimaszewski received his M.Sc. degree in 2007 from Warsaw University of Technology, Warsaw, Poland. Since then he has been a Ph.D. student with the Institute of Automatic Control and Robotics. His research interests include learning algorithms, image processing, and gait control in legged robots.
