Abstract
There is a growing number of autonomous driving datasets that can be used to benchmark vision- and LiDAR-based place recognition and localization methods. The same sensor modalities, vision and depth, are equally important for indoor localization and navigation, but large indoor datasets are scarce. This work presents a realistic indoor dataset for the long-term evaluation of place recognition and localization methods. The dataset contains RGB and LiDAR sequences captured inside campus buildings over a period of nine months and under various illumination and occupancy conditions. It covers three typical indoor spaces: an office, a basement, and a foyer. We describe the collection of the dataset and, in the experimental part, report results for two state-of-the-art deep learning place recognition methods. The data will be available through https://github.com/lasuomela/TAU-Indoors.
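Place recognition methods such as those benchmarked here are commonly evaluated by image retrieval: each query image is matched against a database of mapped images via global descriptors, and a retrieval counts as correct if a top-ranked database image lies within some distance of the query's true position. The sketch below illustrates this recall@k protocol; the function name, descriptor shapes, and 5 m threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def recall_at_k(query_desc, db_desc, query_pos, db_pos, k=1, dist_thresh=5.0):
    """Fraction of queries for which at least one of the top-k retrieved
    database images lies within dist_thresh metres of the query.

    query_desc, db_desc: (Nq, D) and (Nd, D) global descriptors.
    query_pos, db_pos:   (Nq, P) and (Nd, P) ground-truth positions.
    """
    # Cosine similarity between L2-normalised global descriptors.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    sims = q @ d.T                               # (Nq, Nd) similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]      # indices of k best matches

    hits = 0
    for i, idx in enumerate(topk):
        # Ground-truth distances from the query to its retrieved candidates.
        dists = np.linalg.norm(db_pos[idx] - query_pos[i], axis=1)
        if np.any(dists <= dist_thresh):
            hits += 1
    return hits / len(query_desc)
```

In practice the descriptors would come from a network such as NetVLAD applied to the dataset's RGB frames, with positions taken from the ground-truth poses.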
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Dag, A. et al. (2023). TAU-Indoors Dataset for Visual and LiDAR Place Recognition. In: Gade, R., Felsberg, M., Kämäräinen, JK. (eds) Image Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13886. Springer, Cham. https://doi.org/10.1007/978-3-031-31438-4_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31437-7
Online ISBN: 978-3-031-31438-4
eBook Packages: Computer Science, Computer Science (R0)