
Visual Mapping in Light-Crowded Indoor Environments

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 302)

Abstract

With the recent success of affordable RGB-D cameras, solutions to the Visual Simultaneous Localization and Mapping (VSLAM) problem have advanced considerably. To achieve accurate maps, however, most proposed approaches assume a static environment. In industrial applications no such guarantee exists: the SLAM algorithm has to cope with moving objects such as human beings. We present an approach to detect moving objects in RGB-D camera images, based on point cloud and image filtering techniques. We present test results on publicly available datasets and further show the performance of the algorithm and its influence on mapping and on the accuracy of a visual SLAM system.
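
As a rough illustration of the kind of depth-based filtering the abstract refers to, the Python sketch below flags image regions whose depth changes strongly between two aligned depth frames and keeps only sufficiently large connected blobs; visual features falling inside such regions could then be excluded from the SLAM front end. This is not the authors' implementation: it assumes the two frames are approximately registered (e.g. a stationary or slowly moving camera), and the function name and thresholds are illustrative assumptions.

import numpy as np
import cv2

def moving_object_mask(depth_prev, depth_curr, diff_thresh_m=0.10, min_blob_px=500):
    """Rough per-pixel motion mask from two aligned depth frames (in metres).

    Illustrative sketch only: flags pixels whose depth changed by more than
    diff_thresh_m between frames and keeps large connected blobs, which can
    then be used to discard image features on suspected moving objects.
    """
    # Ignore pixels with invalid (zero) depth in either frame.
    valid = (depth_prev > 0) & (depth_curr > 0)
    diff = np.abs(depth_curr.astype(np.float32) - depth_prev.astype(np.float32))
    mask = ((diff > diff_thresh_m) & valid).astype(np.uint8) * 255

    # Morphological opening removes isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Keep only blobs large enough to plausibly be a person or moved object.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    out = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background component
        if stats[i, cv2.CC_STAT_AREA] >= min_blob_px:
            out[labels == i] = 255
    return out  # 255 where motion is suspected, 0 elsewhere

In a SLAM front end, keypoints whose pixel coordinates fall inside the returned mask would simply be dropped before feature matching and map update.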

Notes

  1. https://www.iti.uni-luebeck.de/navigation.

  2. http://www.mobilerobots.com/ResearchRobots/PeopleBot.aspx.

Author information

Corresponding author

Correspondence to Jan Helge Klüssendorff.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Klüssendorff, J.H., Ehlers, K., Maehle, E. (2016). Visual Mapping in Light-Crowded Indoor Environments. In: Menegatti, E., Michael, N., Berns, K., Yamaguchi, H. (eds) Intelligent Autonomous Systems 13. Advances in Intelligent Systems and Computing, vol 302. Springer, Cham. https://doi.org/10.1007/978-3-319-08338-4_66

  • DOI: https://doi.org/10.1007/978-3-319-08338-4_66

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08337-7

  • Online ISBN: 978-3-319-08338-4

  • eBook Packages: Engineering (R0)
