
RGB-D Camera and 2D Laser Integration for Robot Navigation in Dynamic Environments

Conference paper in: Advances in Soft Computing (MICAI 2019)

Abstract

Navigation, localization, and mapping are challenging tasks that any mobile service robot needs to solve. Given that this type of robot generally navigates in 2D planar environments, a common and highly effective solution is laser-based mapping (SLAM) and navigation. Unfortunately, because they cannot detect obstacles outside of a single plane of view, these algorithms are affected by irregular obstacles in the environment, even more so when the obstacles are dynamic. To address this problem, we propose a method to integrate data from a 2D laser range finder (LRF) and an RGB-D camera. In this paper, our goal is to enrich a 2D grid-based map by extracting and processing a depth image from an RGB-D camera and fusing it with the information from the LRF. To test the algorithm, we set up five different scenarios in which pure laser navigation would be an ambitious task. Comparative results between pure LRF and LRF + RGB-D navigation are presented. Despite the simplicity of the method, the results show a significant improvement in the robot's navigation, making it more robust in complex, dynamic environments.
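The abstract only outlines the idea of fusing a depth image with a 2D laser scan; the paper's exact pipeline is not reproduced here. The sketch below illustrates one common way to realize that idea: collapse each depth-image column to its nearest valid return (a "virtual" 2D scan, so obstacles above or below the laser plane are still seen), then keep the shorter range on each laser beam. The function names, the pinhole parameters `fx`/`cx`, and the per-beam minimum rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_to_virtual_scan(depth, fx, cx, min_d=0.3, max_d=4.0):
    """Collapse a depth image (metres) into a virtual 2D laser scan.

    For each image column, keep the nearest valid depth across all rows,
    so obstacles above or below the real laser plane still register.
    Returns (bearings, ranges) in the camera frame.
    """
    d = depth.astype(float)
    d[(d < min_d) | (d > max_d)] = np.inf    # mask invalid / out-of-range returns
    ranges = d.min(axis=0)                   # nearest obstacle per column
    cols = np.arange(depth.shape[1])
    bearings = np.arctan2(cols - cx, fx)     # pinhole-model bearing per column
    return bearings, ranges

def fuse_scans(lrf_bearings, lrf_ranges, cam_bearings, cam_ranges):
    """Fuse conservatively: keep the shorter range on each laser beam."""
    fused = lrf_ranges.astype(float).copy()
    # map each camera bearing to the closest LRF beam (bearings are sorted)
    idx = np.clip(np.searchsorted(lrf_bearings, cam_bearings),
                  0, len(lrf_bearings) - 1)
    np.minimum.at(fused, idx, cam_ranges)
    return fused
```

For example, a box sitting below the laser plane is invisible to the LRF but appears in the depth image; the per-beam minimum then shortens the corresponding laser ranges, so a standard occupancy-grid mapper marks it as an obstacle.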


Notes

  1. homer gui: https://gitlab.uni-koblenz.de/robbie/homer_gui.

  2. homer mapping and navigation algorithm: https://gitlab.uni-koblenz.de/robbie/homer_mapnav.

  3. Link to our official web page: www.robotic.inaoep.mx/~markovito.

  4. The complete test run for the small scattered obstacles scenario can be viewed at https://youtu.be/XomosxqqsvM.

  5. Irregular-shaped obstacles scenario test run available at https://youtu.be/OkTQYaHHaGM.

  6. Footage of the robot in the dynamic human scenario at https://youtu.be/77IImkjriFw.

  7. Demonstration video of the walking-in-front robot scenario at https://youtu.be/mIdDDreSwyo.

  8. Video of the robot navigating through a small entrance at https://youtu.be/qprdONPSrig.


Author information

Correspondence to Orlando Lara-Guzmán.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Lara-Guzmán, O., Serrano, S.A., Carrillo-López, D., Sucar, L.E. (2019). RGB-D Camera and 2D Laser Integration for Robot Navigation in Dynamic Environments. In: Martínez-Villaseñor, L., Batyrshin, I., Marín-Hernández, A. (eds.) Advances in Soft Computing. MICAI 2019. Lecture Notes in Computer Science, vol. 11835. Springer, Cham. https://doi.org/10.1007/978-3-030-33749-0_53

  • DOI: https://doi.org/10.1007/978-3-030-33749-0_53

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33748-3

  • Online ISBN: 978-3-030-33749-0

  • eBook Packages: Computer Science (R0)
