Compass aided visual-inertial odometry

https://doi.org/10.1016/j.jvcir.2018.12.029

Abstract

With the development of vision and optimization techniques, visual-inertial odometry (VIO) has demonstrated the capability of motion estimation in GNSS-denied conditions. VIO can estimate absolute pitch and roll angles, but not the absolute azimuth. In this paper, we propose a compass aided VIO that obtains the azimuth with respect to the north direction in the geographic frame. With compass aiding, the yaw estimation error is greatly reduced thanks to the direct azimuth measurement. Furthermore, the consistency of the VIO backend estimator is improved, and the accuracy of the estimated pose states improves overall. The aiding approach is a tightly coupled information fusion of camera, IMU and magnetoresistive sensors, optimized with pre-integration and bundle adjustment. We derive the compass residual model based on the pre-integration model, and then deduce its Jacobian and covariance forms to solve the nonlinear equations. The compass aided VIO software was implemented on an Nvidia Jetson TX2. The system was fully tested in hardware-in-the-loop simulation and in a vehicle test in a real physical environment, comparing the pose errors of VIO with and without compass aiding. The simulation results show that the position and yaw errors are clearly improved; the compass aided VIO remains consistent, whereas the pure VIO does not. Consistency is evaluated by the average NEES over Monte-Carlo runs in simulation. The vehicle test shows that the position error is reduced by 23% and the yaw error by 21%. As a result, the compass aided VIO not only improves pose estimation accuracy, especially position and yaw, but also improves the consistency of the VIO system.

Introduction

Visual odometry (VO) uses feature matches between successive images to estimate position and orientation increments in real time, and has gradually been applied to robot navigation in GNSS-denied environments [1]. However, the performance of VO depends on the illumination and texture of the scene. The Strapdown Inertial Navigation System (SINS) has become a standard component of navigation systems [2], [3], serving as the base system for applications such as aircraft, UAVs, ground vehicles, ships and smartphones. The adaptability of SINS is better than that of other navigation systems because the IMU measures ego-motion directly. However, the main error source of SINS is the IMU, whose errors diverge with time; for example, the position error grows with the cube of travel time, with the gyro bias as the scale factor. As a result, a SINS with a low-grade IMU cannot be used independently; it must be integrated with other sensors, such as a GNSS receiver or VO. The accuracy of VO mainly depends on the resolution of the camera, and a camera with millions of pixels can provide higher accuracy than a SINS with a consumer-grade IMU. Therefore, fusing VO and INS improves the robustness and precision of the navigation system, enabling motion estimation in GPS-denied conditions such as indoor environments and urban canyons.

The motion estimation system that integrates VO and INS is called visual-inertial odometry (VIO) [44], [45], [46]. It is generally divided into three functional parts: the VO frontend, SINS computation and the VIO backend. The VO frontend performs image feature extraction and matching as well as pose calculation. According to the feature extraction and matching approach, VO is usually divided into two types: direct methods [4], [5] and indirect methods [6], [7]. Pose calculation methods are classified by monocular and binocular vision; the main methods are Nister's 5-point method for monocular vision, Iterative Closest Point (ICP) for binocular vision, and Perspective-n-Point (PnP) for both [8]. The SINS calculates the position, velocity and orientation of the vehicle with respect to the navigation reference frame from the specific force and angular velocity measured by the IMU [9]. The VIO backend estimates the motion states and sensor errors [47], [48]; the main methods are extended Kalman filtering [10], sliding-window smoothing estimators [7] and recursive global smoothing estimators [12].
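To make the PnP pose calculation concrete, the following is a minimal NumPy sketch of the classic Direct Linear Transform (DLT) variant of PnP. It is an illustration only, not the solver used in the paper, and the function names (`pnp_dlt`, `project`) are ours.

```python
import numpy as np

def pnp_dlt(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 exact 3D-2D
    correspondences with the Direct Linear Transform (DLT).
    Each correspondence (Xw, u) yields two linear equations in the
    12 entries of P; the null vector of the stacked system is P."""
    rows = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        rows.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -u[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # right singular vector of smallest value

def project(P, X):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

In practice a RANSAC loop and a nonlinear refinement step follow the DLT, but the linear step above already shows how 2D-3D matches fix the camera pose.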

The system integrating an IMU and a magnetic compass is often called a dead-reckoning system, which has been widely used for motion state estimation in GPS-denied conditions [11] and overcomes the error divergence of pure inertial navigation [13], [14]. An inertial navigation system provides global measurements of the pitch and roll angles referenced to the gravity vector, while its yaw measurement is only local. A compass, however, provides the absolute azimuth referenced to the geomagnetic vector, so the dead-reckoning combination makes the yaw direction of the SINS globally observable. Fusing the compass with VIO brings several benefits. Firstly, it provides azimuth in the earth frame instead of yaw in a local ground frame. Secondly, in environments with low illumination, repetitive texture or motion blur, the compass aided VIO still works. Lastly, the number of globally observable orientation directions in the backend estimator increases from 2 to 3: the yaw direction becomes observable, which overcomes the inconsistency of the estimator and prevents the system from degenerating to a suboptimal solution.

The main contributions of this paper are summarized as follows: (1) the system scheme and computing framework of the compass aided VIO are presented; (2) the minimum cost function of the compass aided VIO is deduced based on pre-integration theory, and its Jacobian and covariance iteration forms are derived; (3) a hardware-in-the-loop simulation platform is implemented based on AirSim to compare the performance of the compass aided VIO and the classic VIO.

Section snippets

Coordinate frame

The compass aided VIO motion estimation system contains four kinds of sensors, which can be classified into external-information sensors and ego-motion sensors according to the source of the measured information. The external-information sensors are the cameras and the geomagnetic sensors, which acquire images of the motion scene and measure the geomagnetic field intensity respectively; the ego-motion sensor is the IMU, comprising a gyroscope and an accelerometer, which measure the vehicle

SINS and IMU

In the SINS, the Inertial Measurement Unit (IMU) measures the angular velocity and the specific force of the vehicle, from which position, velocity and orientation are calculated. An IMU consists of an accelerometer and a gyroscope. The accelerometer measures the specific force, defined as the resultant acceleration excluding gravitational acceleration. In the NED frame, the velocity mechanization must also compensate the Coriolis acceleration arising from earth rotation, 2ωe × v.
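To give a feel for the size of this Coriolis term, here is a short NumPy sketch; the constant and function names are ours, and the transport-rate term of a full mechanization is omitted for brevity.

```python
import numpy as np

OMEGA_E = 7.292115e-5  # earth rotation rate [rad/s]

def coriolis_correction(lat_deg, v_ned):
    """Coriolis acceleration 2 * w_ie^n x v^n in the NED frame.
    w_ie^n = OMEGA_E * [cos(lat), 0, -sin(lat)] for latitude lat."""
    lat = np.radians(lat_deg)
    omega_n = OMEGA_E * np.array([np.cos(lat), 0.0, -np.sin(lat)])
    return 2.0 * np.cross(omega_n, v_ned)
```

For a vehicle driving north at 30 m/s at 45° latitude the correction is on the order of a few mm/s², small but systematic, which is why it appears in the SINS velocity update.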

The Front-End of visual inertial odometry

The visual-inertial odometry front-end is derived from visual odometry. Its main functions are image feature extraction and matching, landmark sifting, calculation of pose increments, and triangulation, as shown in Fig. 3. The camera sensor can be either monocular or binocular.

The feature extraction and matching algorithm of VIO is expected to offer high accuracy, robustness and repeatability. With the development of image processing and computer vision technology, Harris [23], FAST [24]
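As a concrete illustration of corner detection, the following is a minimal NumPy sketch of the Harris response (not the paper's implementation, which would typically use optimized library detectors); the helper names are ours.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the 3x3 box-smoothed structure tensor of the image gradients.
    Large positive R indicates a corner; R near zero a flat region."""
    Iy, Ix = np.gradient(img.astype(float))

    def box3(a):  # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

On a synthetic image containing a bright square, the response is positive at the square's corners and exactly zero on flat background, which is the behavior a frontend relies on when sifting repeatable features.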

The backend of visual inertial odometry

The backend of visual-inertial odometry optimizes the motion states and the IMU errors. In our project, according to the source of the measurement values in the objective function, the estimator is divided into three parts: bundle adjustment, IMU pre-integration and the variation in the yaw direction.
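The role of the compass residual in the backend can be shown on a toy 1-D yaw problem: relative yaw increments (standing in for pre-integrated gyro measurements) fix the trajectory only up to a global yaw offset, while absolute compass measurements anchor it. This weighted linear least-squares sketch is ours, not the paper's estimator, whose residuals are nonlinear and jointly optimized with bundle adjustment.

```python
import numpy as np

def fuse_yaw(delta_meas, compass_meas, w_gyro=10.0, w_mag=1.0):
    """Weighted linear least squares over yaw states psi_0..psi_{n-1}.

    delta_meas[i]   measures psi_{i+1} - psi_i (pre-integration-like);
    compass_meas[i] measures psi_i absolutely (compass residual).
    Without the compass rows the system has a one-dimensional null
    space (a global yaw offset); the compass rows remove it."""
    n = len(compass_meas)
    rows, rhs = [], []
    for i, d in enumerate(delta_meas):      # relative constraints
        r = np.zeros(n)
        r[i], r[i + 1] = -w_gyro, w_gyro
        rows.append(r)
        rhs.append(w_gyro * d)
    for i, c in enumerate(compass_meas):    # absolute constraints
        r = np.zeros(n)
        r[i] = w_mag
        rows.append(r)
        rhs.append(w_mag * c)
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                              rcond=None)
    return sol
```

The weights play the role of the inverse measurement covariances; in the full estimator they come from the pre-integration covariance propagation and the compass noise model.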

Simulation platform

The compass aided VIO system is tested with AirSim [23], a simulator for drones, cars and more, built on Unreal Engine and developed by Microsoft. The simulation system consists of the simulation computer and the navigation on-board computer. The simulation computer runs AirSim to compute the UAV dynamic model and simulate the avionic sensor information. AirSim supports hardware-in-the-loop simulation with PIXHAWK flight controllers [14] for physically and visually realistic simulations. And in the VIO

Result of VIO inconsistency

When solving the nonlinear optimization problem, the Jacobian is evaluated at different state estimates across iterations, which makes the yaw direction appear observable even though it is unobservable in theory. Since the compass provides global observability for the yaw direction, this source of VIO inconsistency no longer applies. The consistency of the estimation system is evaluated by the average Normalized Estimation Error Squared (NEES) [47], shown in (40). In (40), ε represents the error of position and orientation
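The average NEES metric itself is straightforward to compute; `average_nees` is our illustrative name. For a consistent estimator the errors are zero-mean with the reported covariance, so each NEES sample follows a chi-square distribution and the Monte-Carlo average should stay close to the state dimension d.

```python
import numpy as np

def average_nees(errors, covs):
    """Average NEES over M Monte-Carlo runs: mean of e_i^T P_i^{-1} e_i,
    where e_i is the estimation error and P_i the estimator covariance.
    A consistent estimator yields a value close to the state dimension d
    (the mean of a chi-square variable with d degrees of freedom)."""
    nees = [e @ np.linalg.solve(P, e) for e, P in zip(errors, covs)]
    return float(np.mean(nees))
```

An inconsistent estimator, one that is overconfident about yaw, reports covariances that are too small, so the average NEES drifts well above d; this is the signature the simulation study looks for.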

The configuration of the vehicle test system

To test the accuracy of the compass aided VIO, its devices are installed on the test vehicle, including the cameras, the magnetoresistive sensors and the IMU. For evaluating accuracy, a high-precision integrated navigation system, consisting of optic-fiber gyroscopes and a GPS receiver, is used as the reference. The integrated navigation system we adopted is the SPAN-CPT produced by NovAtel. Its position precision reaches 1 m (horizontal) and 0.6 m (vertical). The Euler

The process of vehicle test

The compass aided VIO device and the test reference device SPAN-CPT are installed on the top of the test vehicle. Firstly, the SPAN-CPT starts its initial alignment. When more than 4 satellites are visible to the GPS receiver, the test vehicle begins to drive. Once the ground software indicates "alignment completed", the vehicle stops and waits for the VIO to start. Secondly, the compass aided VIO begins to initialize. The initialization process, according to the accelerometer and

Result

The driving distance used to evaluate the performance of the compass aided VIO is 5 km. The test took place in the district of CIOMP, whose position in the LLA coordinate frame is (43.849092°, 125.401490°, 2.22 m). For convenience of calculating the position error, positions in the LLA coordinate frame are converted to the ground coordinate frame. The position, Euler angle and speed errors of the two systems are compared in this section to analyze the performance of the compass aided VIO. The
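The LLA-to-local-frame conversion can be sketched as an ECEF-then-ENU transform under the WGS-84 ellipsoid; this is a generic illustration with our function names, not necessarily the paper's exact ground-frame definition.

```python
import numpy as np

A = 6378137.0          # WGS-84 semi-major axis [m]
E2 = 6.69437999014e-3  # WGS-84 first eccentricity squared

def lla_to_ecef(lat, lon, h):
    """Geodetic (deg, deg, m) to earth-centered earth-fixed XYZ."""
    lat, lon = np.radians(lat), np.radians(lon)
    N = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1.0 - E2) + h) * np.sin(lat)])

def lla_to_enu(lat, lon, h, lat0, lon0, h0):
    """Local east-north-up coordinates of (lat, lon, h) relative to the
    origin (lat0, lon0, h0), via the ECEF difference rotated into the
    origin's tangent plane."""
    d = lla_to_ecef(lat, lon, h) - lla_to_ecef(lat0, lon0, h0)
    la, lo = np.radians(lat0), np.radians(lon0)
    R = np.array([
        [-np.sin(lo),              np.cos(lo),             0.0],
        [-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)],
        [ np.cos(la) * np.cos(lo),  np.cos(la) * np.sin(lo), np.sin(la)]])
    return R @ d
```

With the CIOMP test site as origin, a point 0.001° further north maps to roughly 111 m of local northing, which is the scale at which the reported position errors are measured.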

Conclusion

In this paper, a compass aided VIO method has been demonstrated, and a motion estimation system tightly coupling a magnetoresistive sensor, an IMU and a camera has been established. Firstly, the calculation method of the magnetic heading is introduced and the design of the visual odometry front-end is summarized. Then, based on the sliding-window smoothing estimator, the objective function of the yaw angle with the compass and its Jacobian calculation form were deduced.

Conflict of interest

There is no conflict of interest.

Acknowledgement

The authors are grateful for the comments and suggestions of the reviewers and the Editor that helped to improve the paper significantly.

References (41)

  • D. Titterton et al.

    Strapdown inertial navigation technology

    IEEE Aerosp. Electron. Syst. Mag.

    (2005)
  • J. Engel et al.

    LSD-SLAM: Large-scale direct monocular SLAM

  • V. Usenko et al.

    Direct visual-inertial odometry with stereo cameras

  • R. Mur-Artal et al.

    ORB-SLAM: A versatile and accurate monocular SLAM system

    IEEE Trans. Rob.

    (2017)
  • S. Leutenegger et al.

    Keyframe-based visual-inertial odometry using nonlinear optimization

    Int. J. Rob. Res.

    (2015)
  • D. Scaramuzza et al.

    Visual odometry: Part I: the first 30 years and fundamentals

    IEEE Rob. Autom. Mag.

    (2011)
  • P.G. Savage

    Strapdown inertial navigation integration algorithm design Part 1: attitude algorithms

    J. Dyn. Syst. Meas. Control.

    (1998)
  • A.I. Mourikis et al.

    A multi-state constraint Kalman filter for vision-aided inertial navigation

  • M. Kaess et al.

    iSAM2: Incremental smoothing and mapping using the Bayes tree

    Int. J. Rob. Res.

    (2012)
  • R. Mahony et al.

    Nonlinear complementary filters on the special orthogonal group

    IEEE Trans. Autom. Control

    (2008)
This article is part of the Special Issue on TIUSM.