Image-only place recognition based on regional aggregating ConvNet features for underground parking lots

Original article · The Visual Computer

Abstract

Place recognition searches for the map node closest to the query node, which is an important task for vehicle localization. Traditional visual place recognition methods for underground parking lots require the deployment of additional location signals, such as WiFi or Bluetooth. This paper uses only front-view images to realize place recognition. First, we employ random coefficients to reduce the dimensionality of the ConvNet features, yielding Condensed ConvNet Features (CCFs). Second, we average the CCFs within a regional zone to obtain the Regional Aggregating ConvNet Feature (RACF). Unlike WiFi or Bluetooth signals, the RACF is extracted directly from the front-view image and has a superior ability to represent regional zones. Third, we propose a multiscale place recognition method that adopts a coarse-to-fine strategy, greatly reducing time consumption while improving precision. Finally, we evaluate the proposed method on data collected in the underground parking lot of Hubei University of Technology. The experimental results show that the proposed method achieves high precision at a fast speed.
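The sketch below summarizes how such a pipeline could be assembled. It is a minimal, hypothetical illustration only: it assumes a torchvision VGG16 backbone, interprets the "random coefficients" as a fixed Gaussian random projection matrix, and uses cosine similarity for matching; none of these specific choices are stated in the abstract.

```python
# Minimal, hypothetical sketch of the pipeline described in the abstract.
# Assumptions not taken from the paper: a torchvision VGG16 backbone,
# a fixed Gaussian random projection standing in for the "random coefficients",
# mean pooling over a zone's frames, and cosine similarity for matching.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

torch.manual_seed(0)

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Random projection: 512-d pooled conv feature -> 128-d CCF.
projection = torch.randn(512, 128) / 128 ** 0.5

def ccf(image: Image.Image) -> torch.Tensor:
    """Condensed ConvNet Feature: globally pooled conv map, randomly projected."""
    with torch.no_grad():
        fmap = backbone(preprocess(image).unsqueeze(0))   # (1, 512, H, W)
    pooled = fmap.mean(dim=(2, 3)).squeeze(0)             # (512,)
    return F.normalize(pooled @ projection, dim=0)        # (128,)

def racf(zone_images):
    """Regional Aggregating ConvNet Feature: mean of the CCFs within one zone."""
    return F.normalize(torch.stack([ccf(im) for im in zone_images]).mean(dim=0), dim=0)

def localize(query, zone_racfs, zone_frames):
    """Coarse-to-fine search: best zone by RACF, then best frame within that zone."""
    q = ccf(query)
    zone_id = int(torch.argmax(zone_racfs @ q))           # coarse: one score per zone
    frame_ccfs = torch.stack([ccf(im) for im in zone_frames[zone_id]])
    frame_id = int(torch.argmax(frame_ccfs @ q))          # fine: re-rank zone frames only
    return zone_id, frame_id
```

Here zone_frames would be a list of per-zone image lists and zone_racfs the stack of their RACFs (torch.stack([racf(frames) for frames in zone_frames])). The coarse step scores a single RACF per zone, and the fine step re-ranks only the frames of the selected zone, which is where the speed-up of a coarse-to-fine strategy would come from.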


Data availability

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.


Acknowledgments

The work presented in this paper was funded by the National Natural Science Foundation of China (Grants 61502155 and 61772180), the Project of Xiangyang Research Institute of Hubei University of Technology (No. XYYJ2022C08), and the Fujian Provincial Key Laboratory of Data Intensive Computing and Key Laboratory of Intelligent Computing and Information Processing, Fujian (BD201801).

Author information

Corresponding author

Correspondence to Zhigang Xu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, X., Zhu, X., Yan, Z. et al. Image-only place recognition based on regional aggregating ConvNet features for underground parking lots. Vis Comput 40, 1167–1177 (2024). https://doi.org/10.1007/s00371-023-02838-6

Keywords

Navigation