
SLAM Methods for Augmented Reality Systems for Flight Simulators

  • Conference paper
  • First Online:
Computational Science – ICCS 2023 (ICCS 2023)

Abstract

In this paper, we present a review and practical evaluation of Simultaneous Localization and Mapping (SLAM) methods for flight simulators. We review recent research and development in SLAM applications across a wide range of domains, such as autonomous driving, robotics, and augmented reality (AR). We then focus on methods selected for their usefulness in AR systems for training on and servicing flight simulators. Localization and mapping in such an environment are considerably more complex than in many others, since a flight simulator is a relatively small and enclosed area. Our previous experiments showed that the built-in SLAM system in HoloLens is insufficient for such areas and has to be enhanced with additional elements, such as QR codes. The study of other methods presented here can therefore improve the localization and mapping of AR systems in flight simulators.
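The abstract's idea of anchoring a drifting headset SLAM pose to QR codes at known positions can be illustrated with a minimal sketch. All names here (`QR_ANCHORS`, `drift_correction`, `correct_pose`) and the pure-translation correction model are illustrative assumptions, not the paper's actual implementation, which would also need to handle orientation and sensor noise.

```python
# Hedged sketch: correcting SLAM drift with QR-code anchors placed at
# surveyed positions inside the simulator cabin. Names and the simple
# translation-only model are assumptions for illustration.

# Known world-frame positions of QR codes in the cabin (metres).
QR_ANCHORS = {
    "cabin-left": (0.0, 1.2, 0.8),
    "cabin-right": (2.4, 1.2, 0.8),
}

def drift_correction(anchor_id, slam_observed_pos):
    """Translation offset between where SLAM currently places the
    anchor and where it actually is (world frame)."""
    true_pos = QR_ANCHORS[anchor_id]
    return tuple(t - o for t, o in zip(true_pos, slam_observed_pos))

def correct_pose(slam_pose, offset):
    """Apply the latest anchor-derived offset to a raw SLAM pose."""
    return tuple(p + d for p, d in zip(slam_pose, offset))

# SLAM has drifted 0.1 m along x: it sees the left anchor at x = -0.1.
offset = drift_correction("cabin-left", (-0.1, 1.2, 0.8))
print(correct_pose((1.0, 0.5, 0.0), offset))  # (1.1, 0.5, 0.0)
```

In a real system the detected QR pose would come from the headset camera and the correction would be a full rigid-body transform; this sketch only shows the anchoring principle.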



Author information

Corresponding author

Correspondence to Onyeka J. Nwobodo.


Additional information

The authors would like to acknowledge that this paper has been written based on the results achieved within the WrightBroS project. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 822483. Disclaimer: The paper reflects only the authors' view, and the Research Executive Agency (REA) is not responsible for any use that may be made of the information it contains.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nwobodo, O.J., Wereszczyński, K., Cyran, K. (2023). SLAM Methods for Augmented Reality Systems for Flight Simulators. In: Mikyška, J., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M. (eds) Computational Science – ICCS 2023. ICCS 2023. Lecture Notes in Computer Science, vol 14073. Springer, Cham. https://doi.org/10.1007/978-3-031-35995-8_46


  • DOI: https://doi.org/10.1007/978-3-031-35995-8_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35994-1

  • Online ISBN: 978-3-031-35995-8

  • eBook Packages: Computer Science, Computer Science (R0)
