Research Article
DOI: 10.1145/3343031.3351079

Visual-Inertial State Estimation with Pre-integration Correction for Robust Mobile Augmented Reality

Published: 15 October 2019

Abstract

Mobile devices equipped with a monocular camera and an inertial measurement unit (IMU) are ideal platforms for augmented reality (AR) applications. However, the nontrivial noise of the low-cost IMUs typically found in consumer-level mobile devices can lead to large errors in pose estimation and, in turn, significantly degrade the user experience in mobile AR apps. In this study, we propose a novel monocular visual-inertial state estimation approach that provides robust and accurate pose estimation even with low-cost IMUs. The core of our method is an IMU pre-integration correction approach that effectively reduces the negative impact of IMU noise using the visual constraints in a sliding window together with the kinematic constraint. We seamlessly integrate the IMU pre-integration correction module into a tightly-coupled, sliding-window-based optimization framework for state estimation. Experimental results on the public EuRoC dataset demonstrate the superiority of our method over the state-of-the-art VINS-Mono in terms of smaller absolute trajectory error (ATE) and relative pose error (RPE). We further apply our method to real AR applications on two types of consumer-level mobile devices equipped with low-cost IMUs, i.e., an off-the-shelf smartphone and a pair of AR glasses. Experimental results demonstrate that our method enables robust AR with little drift on both devices.
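
For context, the pre-integration referred to above is the standard on-manifold IMU pre-integration of Forster et al. [4], building on Lupton and Sukkarieh [17]: between two keyframes i and j, the raw gyroscope readings and accelerometer readings are summarized into relative motion terms that do not depend on the absolute state. A minimal statement of the standard pre-integrated terms, in LaTeX notation (this is the textbook formulation from the cited references, not the paper's correction, which adjusts these terms using the visual and kinematic constraints):

    \Delta R_{ij} = \prod_{k=i}^{j-1} \mathrm{Exp}\big((\hat{\omega}_k - b_k^g)\,\Delta t\big)
    \Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik}\,(\hat{a}_k - b_k^a)\,\Delta t
    \Delta p_{ij} = \sum_{k=i}^{j-1} \Big[\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta R_{ik}\,(\hat{a}_k - b_k^a)\,\Delta t^2\Big]

Here \hat{\omega}_k and \hat{a}_k are the raw gyroscope and accelerometer measurements, b^g and b^a the corresponding biases, and Exp(.) the SO(3) exponential map. Because measurement noise accumulates through these products and sums, a low-cost IMU yields noisy pre-integrated terms, which is precisely what the proposed correction targets.

The reported ATE and RPE follow the standard definitions popularized by Sturm et al. [23]. As an illustrative sketch only, a translation-only version of both metrics in Python (assuming hypothetical N x 3 arrays gt_xyz and est_xyz that are already time-associated and rigidly aligned; the alignment step is omitted here):

    import numpy as np

    def ate_rmse(gt_xyz, est_xyz):
        # Absolute trajectory error: RMSE of the per-pose translational
        # differences between the aligned trajectories.
        diff = gt_xyz - est_xyz
        return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

    def rpe_rmse(gt_xyz, est_xyz, delta=1):
        # Relative pose error (translation-only form): RMSE of the
        # difference in relative motion over a fixed frame offset delta.
        gt_rel = gt_xyz[delta:] - gt_xyz[:-delta]
        est_rel = est_xyz[delta:] - est_xyz[:-delta]
        diff = gt_rel - est_rel
        return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))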

References

[1] Michael Burri, Janosch Nikolic, Pascal Gohl, Thomas Schneider, Joern Rehder, Sammy Omari, Markus W. Achtelik, and Roland Siegwart. 2016. The EuRoC micro aerial vehicle datasets. The International Journal of Robotics Research 35, 10 (2016), 1157--1163.
[2] Jakob Engel, Vladlen Koltun, and Daniel Cremers. 2018. Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 3 (2018), 611--625.
[3] Jakob Engel, Thomas Schöps, and Daniel Cremers. 2014. LSD-SLAM: Large-scale direct monocular SLAM. In European Conference on Computer Vision. Springer, 834--849.
[4] Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza. 2015. IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. Georgia Institute of Technology.
[5] Christian Forster, Matia Pizzoli, and Davide Scaramuzza. 2014. SVO: Fast semi-direct monocular visual odometry. In 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 15--22.
[6] Richard Hartley and Andrew Zisserman. 2003. Multiple View Geometry in Computer Vision. Cambridge University Press.
[7] Joel A. Hesch, Dimitrios G. Kottas, Sean L. Bowman, and Stergios I. Roumeliotis. 2014. Consistency analysis and improvement of vision-aided inertial navigation. IEEE Transactions on Robotics 30, 1 (2014), 158--176.
[8] Peter J. Huber. 1992. Robust estimation of a location parameter. In Breakthroughs in Statistics. Springer, 492--518.
[9] Vadim Indelman, Stephen Williams, Michael Kaess, and Frank Dellaert. 2013. Information fusion in navigation systems via factor graph based incremental smoothing. Robotics and Autonomous Systems 61, 8 (2013), 721--738.
[10] Eagle S. Jones and Stefano Soatto. 2011. Visual-inertial navigation, mapping and localization: A scalable real-time causal approach. The International Journal of Robotics Research 30, 4 (2011), 407--430.
[11] Jonathan Kelly and Gaurav S. Sukhatme. 2011. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. The International Journal of Robotics Research 30, 1 (2011), 56--79.
[12] Georg Klein and David Murray. 2007. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. IEEE Computer Society, 1--10.
[13] Stefan Leutenegger, Simon Lynen, Michael Bosse, Roland Siegwart, and Paul Furgale. 2015. Keyframe-based visual-inertial odometry using nonlinear optimization. The International Journal of Robotics Research 34, 3 (2015), 314--334.
[14] Mingyang Li and Anastasios I. Mourikis. 2013. High-precision, consistent EKF-based visual-inertial odometry. The International Journal of Robotics Research 32, 6 (2013), 690--711.
[15] Haomin Liu, Guofeng Zhang, and Hujun Bao. 2016. Robust keyframe-based monocular SLAM for augmented reality. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 1--10.
[16] Bruce D. Lucas and Takeo Kanade. 1981. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI). 674--679.
[17] Todd Lupton and Salah Sukkarieh. 2012. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions. IEEE Transactions on Robotics 28, 1 (2012), 61--76.
[18] Anastasios I. Mourikis and Stergios I. Roumeliotis. 2007. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings 2007 IEEE International Conference on Robotics and Automation. IEEE, 3565--3572.
[19] Raul Mur-Artal, Jose Maria Martinez Montiel, and Juan D. Tardos. 2015. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics 31, 5 (2015), 1147--1163.
[20] Tong Qin, Peiliang Li, and Shaojie Shen. 2018. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics 34, 4 (2018), 1004--1020.
[21] Shaojie Shen, Nathan Michael, and Vijay Kumar. 2015. Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs. In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 5303--5310.
[22] Jianbo Shi and Carlo Tomasi. 1993. Good features to track. Technical Report. Cornell University.
[23] Jürgen Sturm, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers. 2012. A benchmark for the evaluation of RGB-D SLAM systems. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 573--580.
[24] Stephan Weiss, Markus W. Achtelik, Simon Lynen, Margarita Chli, and Roland Siegwart. 2012. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In 2012 IEEE International Conference on Robotics and Automation. IEEE, 957--964.
[25] Xin Yang, Jiabin Guo, Tangli Xue, and Kwang-Ting Tim Cheng. 2018. Robust and real-time pose tracking for augmented reality on mobile devices. Multimedia Tools and Applications 77, 6 (2018), 6607--6628.
[26] Xin Yang, Xun Si, Tangli Xue, Liheng Zhang, and Kwang-Ting Tim Cheng. 2015. Vision-inertial hybrid tracking for robust and efficient augmented reality on smartphones. In Proceedings of the 23rd ACM International Conference on Multimedia. ACM, 1039--1042.




Published In

MM '19: Proceedings of the 27th ACM International Conference on Multimedia
October 2019
2794 pages
ISBN: 9781450368896
DOI: 10.1145/3343031

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. graph optimization
  2. mobile augmented reality
  3. pre-integration
  4. visual-inertial state estimation

Qualifiers

  • Research-article


Acceptance Rates

MM '19 Paper Acceptance Rate: 252 of 936 submissions, 27%
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


Article Metrics

  • Downloads (last 12 months): 18
  • Downloads (last 6 weeks): 1
Reflects downloads up to 14 Feb 2025


Cited By

  • Improving SLAM Techniques with Integrated Multi-Sensor Fusion for 3D Reconstruction. Sensors 24, 7 (2024), 2033. https://doi.org/10.3390/s24072033
  • Online Path Description Learning Based on IMU Signals From IoT Devices. IEEE Transactions on Mobile Computing 23, 12 (2024), 11889--11906. https://doi.org/10.1109/TMC.2024.3406436
  • 5G MEC Computation Handoff for Mobile Augmented Reality. In 2024 IEEE International Conference on Metaverse Computing, Networking, and Applications (MetaCom), 129--136. https://doi.org/10.1109/MetaCom62920.2024.00032
  • Full-body Human Motion Reconstruction with Sparse Joint Tracking Using Flexible Sensors. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 2 (2023), 1--19. https://doi.org/10.1145/3564700
  • CR-LDSO: Direct Sparse LiDAR-Assisted Visual Odometry With Cloud Reusing. IEEE Transactions on Multimedia 25 (2023), 9397--9409. https://doi.org/10.1109/TMM.2023.3252161
  • IMU-Aided Precise Point Positioning Performance Assessment with Smartphones in GNSS-Degraded Urban Environments. Remote Sensing 14, 18 (2022), 4469. https://doi.org/10.3390/rs14184469
  • Positioning of Quadruped Robot Based on Tightly Coupled LiDAR Vision Inertial Odometer. Remote Sensing 14, 12 (2022), 2945. https://doi.org/10.3390/rs14122945
  • RGB-D DSO: Direct Sparse Odometry With RGB-D Cameras for Indoor Scenes. IEEE Transactions on Multimedia 24 (2022), 4092--4101. https://doi.org/10.1109/TMM.2021.3114546
  • Robust and Efficient RGB-D SLAM in Dynamic Environments. IEEE Transactions on Multimedia 23 (2021), 4208--4219. https://doi.org/10.1109/TMM.2020.3038323
  • D2VO: Monocular Deep Direct Visual Odometry. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 10158--10165. https://doi.org/10.1109/IROS45743.2020.9341313
