
Visual Loop-Closure Detection via Prominent Feature Tracking

  • Regular paper
  • Journal of Intelligent & Robotic Systems

Abstract

Loop-closure detection (LCD) has become an essential component of any simultaneous localization and mapping (SLAM) framework, as it provides a means to correct the drift error that typically accumulates along a robot’s trajectory. In this article we propose an LCD method based on tracked visual features, combined with a signal peak-trace filtering approach for loop-closure identification. Local binary features are first extracted and tracked through consecutive frames. In this way, visual words are generated online and, in turn, form an incremental bag-of-visual-words (BoVW) vocabulary. Loop closures (LCs) result from a classification method that considers current and past peaks in the similarity matrix: the system discerns the movement of these peaks to decide whether they constitute true-positive detections or background noise. The proposed peak-trace filtering technique is highly robust to noisy signals, enabling the use of only a handful of local visual features per image and thus yielding a considerably smaller visual vocabulary.
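The peak-trace idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, thresholds (`max_jump`, `min_length`), and the per-frame `argmax` peak selection are all assumptions chosen for clarity. The intuition it captures: when a robot revisits a place, the per-frame similarity peak drifts smoothly across matched past frames (a coherent trace), whereas noise produces isolated, jumping peaks.

```python
import numpy as np

def trace_peaks(similarity_rows, max_jump=2, min_length=3):
    """Toy peak-trace filter over successive similarity-matrix rows.

    similarity_rows: sequence of 1-D arrays; row t holds the similarity
    of current frame t against all earlier frames.

    A candidate loop closure is accepted only when the per-row peak
    index moves smoothly (|jump| <= max_jump) for at least min_length
    consecutive rows, separating a true revisit (a coherent trace)
    from isolated noise spikes.  Returns accepted (frame, match) pairs.
    """
    accepted = []        # (frame index, matched past-frame index) pairs
    current = []         # the trace currently being grown
    prev_peak = None
    for t, row in enumerate(similarity_rows):
        peak = int(np.argmax(row))
        if prev_peak is not None and abs(peak - prev_peak) <= max_jump:
            current.append((t, peak))        # trace continues smoothly
        else:
            if len(current) >= min_length:   # flush a long-enough trace
                accepted.extend(current)
            current = [(t, peak)]            # start a new candidate trace
        prev_peak = peak
    if len(current) >= min_length:           # flush the final trace
        accepted.extend(current)
    return accepted
```

With four rows whose peaks advance 10, 11, 12, 13 followed by a noise row peaking at 3, the first four frames form an accepted trace while the outlier is rejected. A real system would of course operate on similarities computed from the tracked binary words rather than synthetic rows.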



Acknowledgment

We acknowledge support of this work by the project “Study, Design, Development and Implementation of a Holistic System for Upgrading the Quality of Life and Activity of the Elderly” (MIS 5047294) which is implemented under the Action “Support for Regional Excellence”, funded by the Operational Programme “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).

Author information

Corresponding author

Correspondence to Ioannis Tsampikos Papapetros.



Cite this article

Papapetros, I.T., Balaska, V. & Gasteratos, A. Visual Loop-Closure Detection via Prominent Feature Tracking. J Intell Robot Syst 104, 54 (2022). https://doi.org/10.1007/s10846-022-01581-9


Keywords

Navigation