
ConSLAM: Periodically Collected Real-World Construction Dataset for SLAM and Progress Monitoring

  • Conference paper
  • Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Abstract

Hand-held scanners are increasingly being adopted into workflows on construction sites. Yet they suffer from accuracy problems that prevent their deployment in demanding use cases. In this paper, we present a real-world dataset collected periodically on a construction site to measure the accuracy of the SLAM algorithms that mobile scanners rely on. The dataset contains time-synchronised, spatially registered images and LiDAR scans, inertial data, and professional ground-truth scans. To the best of our knowledge, this is the first publicly available dataset that reflects the periodic need to scan construction sites for accurate progress monitoring with a hand-held scanner.
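SLAM accuracy against ground-truth data is commonly summarized as absolute trajectory error (ATE). As a minimal, self-contained sketch of that metric (the trajectories and values below are illustrative, not taken from the dataset, and the two trajectories are assumed to be already time-aligned and registered):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    time-aligned, pre-registered lists of (x, y, z) positions."""
    assert len(estimated) == len(ground_truth)
    sq = [
        (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
        for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical estimated and ground-truth positions (metres).
est = [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0), (2.0, 0.1, 0.0)]
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(round(ate_rmse(est, gt), 4))  # prints 0.0816
```

Full evaluation pipelines additionally estimate the rigid alignment between the two trajectories (e.g. via Umeyama alignment) before computing the RMSE; that step is omitted here for brevity.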


Notes

  1. http://www.cvlibs.net/datasets/kitti.
  2. Please consult www.hesaitech.com.
  3. https://www.blender.org/.
  4. https://hilti-challenge.com/dataset-2021.html.
  5. https://hilti-challenge.com/dataset-2022.html.
  6. https://leica-geosystems.com/products/laser-scanners/scanners/leica-rtc360.
  7. https://github.com/ros-drivers/velodyne.
  8. See https://github.com/chennuo0125-HIT/lidar_imu_calib and https://blog.csdn.net/weixin_37835423/article/details/110672571.
  9. https://www.ros.org.
  10. https://wiki.ros.org/message_filters/ApproximateTime.
  11. We used the distance-based downsampling implemented in CloudCompare 2.12.2. See https://www.cloudcompare.org.
  12. https://github.com/HKUST-Aerial-Robotics/A-LOAM.
  13. https://wiki.ros.org/rosbag.
  14. https://opencv.org.
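The footnotes above point to the ROS ApproximateTime policy used to associate camera and LiDAR messages by timestamp. The core pairing idea can be sketched in plain Python; note that this greedy nearest-timestamp matcher is a simplification of the actual ROS policy, and the stream contents and tolerance below are illustrative:

```python
def pair_by_timestamp(stream_a, stream_b, tolerance):
    """Greedily pair messages from two time-sorted streams whose
    timestamps differ by at most `tolerance` seconds.
    Each stream is a list of (timestamp, payload) tuples."""
    pairs, i, j = [], 0, 0
    while i < len(stream_a) and j < len(stream_b):
        ta, tb = stream_a[i][0], stream_b[j][0]
        if abs(ta - tb) <= tolerance:
            pairs.append((stream_a[i], stream_b[j]))
            i += 1
            j += 1
        elif ta < tb:
            i += 1  # message in stream A has no close match; skip it
        else:
            j += 1  # message in stream B has no close match; skip it
    return pairs

# Hypothetical camera frames at ~10 Hz and sparser LiDAR scans.
cam   = [(0.00, "img0"), (0.10, "img1"), (0.20, "img2")]
lidar = [(0.01, "scan0"), (0.21, "scan1")]
matched = pair_by_timestamp(cam, lidar, tolerance=0.02)
print(len(matched))  # prints 2
```

The real ApproximateTime policy buffers queues of messages and chooses sets that minimize the overall timestamp spread, which handles jitter more robustly than this one-pass sketch.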


Acknowledgment

The authors would like to thank Laing O’Rourke for allowing access to their construction site and for collecting the ground-truth scans. We also acknowledge Romain Carriquiry-Borchiari of Ubisoft France for his help with rendering some of the figures. This work is supported by the EU Horizon 2020 BIM2TWIN: Optimal Construction Management & Production Control project under agreement No. 958398. The first author would also like to thank BP, GeoSLAM, Laing O’Rourke, Topcon and Trimble for sponsoring his studentship.

Author information

Corresponding author

Correspondence to Maciej Trzeciak.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Trzeciak, M. et al. (2023). ConSLAM: Periodically Collected Real-World Construction Dataset for SLAM and Progress Monitoring. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13807. Springer, Cham. https://doi.org/10.1007/978-3-031-25082-8_21


  • DOI: https://doi.org/10.1007/978-3-031-25082-8_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25081-1

  • Online ISBN: 978-3-031-25082-8

  • eBook Packages: Computer Science (R0)
