TAU-Indoors Dataset for Visual and LiDAR Place Recognition

Conference paper, published in Image Analysis (SCIA 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13886)

Abstract

There is a growing number of autonomous driving datasets that can be used to benchmark vision- and LiDAR-based place recognition and localization methods. The same sensor modalities, vision and depth, are equally important for indoor localization and navigation, but large indoor datasets remain scarce. This work presents a realistic indoor dataset for the long-term evaluation of place recognition and localization methods. The dataset contains RGB and LiDAR sequences captured inside campus buildings over a period of nine months, under varying illumination and occupancy conditions. It covers three typical indoor spaces: an office, a basement, and a foyer. We describe the collection of the dataset and, in the experimental part, report results for two state-of-the-art deep learning place recognition methods. The data will be made available at https://github.com/lasuomela/TAU-Indoors.
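The experimental evaluation mentioned in the abstract follows the standard place recognition protocol: each query image or scan is encoded into a global descriptor, matched against a database of reference descriptors by nearest-neighbor search, and the retrieval counts as correct when the top-ranked match lies within a fixed distance of the query's ground-truth position. Below is a minimal Python sketch of this metric (recall@1); the array shapes, descriptor dimension, and 5-metre success threshold are illustrative assumptions, not values taken from the paper.

import numpy as np

def recall_at_1(db_desc, q_desc, db_pos, q_pos, threshold_m=5.0):
    """Fraction of queries whose top-1 database match lies within
    threshold_m metres of the query's ground-truth position."""
    # L2-normalise so the dot product equals cosine similarity.
    db = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    q = q_desc / np.linalg.norm(q_desc, axis=1, keepdims=True)
    nearest = np.argmax(q @ db.T, axis=1)  # index of top-1 match per query
    dist = np.linalg.norm(q_pos - db_pos[nearest], axis=1)
    return float(np.mean(dist <= threshold_m))

# Toy usage: random stand-ins for learned descriptors and 2D positions.
rng = np.random.default_rng(0)
db_desc, q_desc = rng.normal(size=(100, 256)), rng.normal(size=(20, 256))
db_pos, q_pos = rng.uniform(0, 50, (100, 2)), rng.uniform(0, 50, (20, 2))
print(f"Recall@1: {recall_at_1(db_desc, q_desc, db_pos, q_pos):.2f}")

In practice the descriptors would come from a learned model such as NetVLAD rather than random arrays; the toy data here only exercises the evaluation logic.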



Author information

Correspondence to Lauri Suomela.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dag, A. et al. (2023). TAU-Indoors Dataset for Visual and LiDAR Place Recognition. In: Gade, R., Felsberg, M., Kämäräinen, JK. (eds) Image Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13886. Springer, Cham. https://doi.org/10.1007/978-3-031-31438-4_22

  • DOI: https://doi.org/10.1007/978-3-031-31438-4_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31437-7

  • Online ISBN: 978-3-031-31438-4

  • eBook Packages: Computer Science, Computer Science (R0)
