
PanoVILD: a challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping

The Journal of Supercomputing

A Publisher Correction to this article was published on 20 June 2022


Abstract

This paper presents a challenging panoramic vision and LiDAR dataset collected by an autonomous vehicle on the Chungbuk National University campus to facilitate robotics research. The vehicle is equipped with a Point Grey Ladybug3 camera, a 3D LiDAR, a global positioning system (GPS) receiver and an inertial measurement unit (IMU). The data are collected while driving in an outdoor environment that includes a parking lot, a semi-off-road path and campus roads with traffic. The data from all sensors mounted on the vehicle are temporally registered and synchronized. The dataset includes 3D LiDAR point clouds, images, and GPS and IMU measurements. The vision data comprise high-resolution fisheye images from the individual Ladybug3 cameras, together covering a 360° field of view, as well as accurately stitched spherical panoramic images. The availability of both multi-fisheye and panoramic images supports the development and validation of novel multi-fisheye, panoramic and 3D LiDAR-based simultaneous localization and mapping (SLAM) systems. The dataset targets applications such as odometry, SLAM, loop-closure detection, and deep learning-based algorithms using visual, inertial and LiDAR data, as well as their fusion. For algorithm evaluation, high-accuracy RTK-GPS measurements are provided as ground truth.
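Because all sensor streams are stated to be temporally registered and synchronized, a common first step when working with such logs is to associate LiDAR scans, panoramic frames and RTK-GPS records by nearest timestamp. The Python sketch below is an illustrative example only: the timestamp lists, the 50 ms tolerance and the function names are assumptions for exposition, not the dataset's documented format or an official toolkit.

```python
# Hypothetical sketch: pairing LiDAR, image and GPS records by nearest timestamp.
# Timestamps are assumed to be in seconds; the tolerance and data layout are
# illustrative assumptions, not the PanoVILD specification.
import bisect


def nearest(sorted_times, t):
    """Return the timestamp in sorted_times closest to t."""
    i = bisect.bisect_left(sorted_times, t)
    candidates = sorted_times[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s - t))


def associate(lidar_times, image_times, gps_times, max_dt=0.05):
    """Pair each LiDAR timestamp with the closest image and GPS timestamps,
    keeping only triples whose offsets are within max_dt seconds."""
    image_times = sorted(image_times)
    gps_times = sorted(gps_times)
    pairs = []
    for t in sorted(lidar_times):
        t_img = nearest(image_times, t)
        t_gps = nearest(gps_times, t)
        if abs(t_img - t) <= max_dt and abs(t_gps - t) <= max_dt:
            pairs.append((t, t_img, t_gps))
    return pairs


if __name__ == "__main__":
    # Toy timestamps; real values would come from the per-sensor logs.
    lidar = [0.00, 0.10, 0.20]
    images = [0.01, 0.11, 0.24]
    gps = [0.00, 0.05, 0.10, 0.15, 0.20]
    print(associate(lidar, images, gps))
```

The same nearest-timestamp association can be used to align estimated trajectories with the RTK-GPS ground truth before computing trajectory error metrics.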






Acknowledgements

This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0004631), and in part by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2021-2020-0-01462) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Author information


Corresponding author

Correspondence to Gon-Woo Kim.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: during typesetting, the e-mail addresses of the authors were interchanged.


About this article


Cite this article

Javed, Z., Kim, GW. PanoVILD: a challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping. J Supercomput 78, 8247–8267 (2022). https://doi.org/10.1007/s11227-021-04198-1


Keywords

Navigation