Experimental Analysis of Appearance Maps as Descriptor Manifolds Approximations

  • Conference paper in: Computer Analysis of Images and Patterns (CAIP 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13053)

Abstract

Images of a given environment, encoded by a holistic image descriptor, produce a manifold that is articulated by the camera pose in that environment. The correct articulation of such a Descriptor Manifold (DM) by the camera poses is the cornerstone of precise Appearance-based Localization (AbL), which implies knowing the corresponding descriptor for any given pose of the camera in the environment. Since such correspondences are only given at sample pairs of the DM (the appearance map), some kind of regression must be applied to predict descriptor values at unmapped locations. This is relevant for AbL because the regression process can be exploited as an observation model for the localization task. This paper analyses the influence of a number of parameters involved in the approximation of the DM from the appearance map, including the sampling density, the method employed to regress values at unvisited poses, and the impact of the image content on the DM structure. We present experimental evaluations of diverse setups and propose an image metric based on the image derivatives, which allows us to build appearance maps in the form of grids of variable density. A preliminary use case is presented as an initial step for future research.
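
The regression step described in the abstract can be prototyped in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: it uses Gaussian Process regression (one common choice for this task) over randomly generated placeholder poses and descriptors to show how a descriptor value, together with a predictive uncertainty, might be regressed at an unmapped pose and then used as an observation model for AbL. All names, dimensions and kernel parameters are hypothetical.

    # Illustrative sketch (assumed, not the paper's code): Gaussian Process
    # regression of holistic image descriptors over camera poses, trained on
    # the sampled appearance map.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Appearance map: N sampled 2D camera poses and their D-dimensional
    # holistic descriptors (placeholders here; in practice, the descriptor
    # of the image captured at each pose, e.g. a CNN embedding).
    rng = np.random.default_rng(0)
    poses = rng.uniform(0.0, 10.0, size=(50, 2))   # sampled poses (x, y)
    descriptors = rng.normal(size=(50, 128))       # placeholder descriptors

    # One GP over pose space, with a shared kernel for all descriptor dimensions.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(poses, descriptors)

    # Predict the descriptor (and its uncertainty) at an unvisited pose. In an
    # AbL pipeline this prediction acts as the observation model: it is compared
    # against the descriptor extracted from the live image.
    query_pose = np.array([[4.2, 7.5]])
    pred_descriptor, pred_std = gp.predict(query_pose, return_std=True)
    print(pred_descriptor.shape, pred_std.shape)

The same pattern extends to other regressors over the appearance map (e.g. interpolation on a grid of sampled poses), and the predictive uncertainty offers a natural weight for a probabilistic localization filter.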



Acknowledgements

This research was funded by the Government of Spain under grant FPU17/04512 and under projects ARPEGGIO (PID2020-117057) and WISER (DPI2017-84827-R), financed by the Government of Spain and the European Regional Development Fund (FEDER). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.

Author information

Correspondence to Francisco-Angel Moreno.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Jaenal, A., Moreno, F.A., Gonzalez-Jimenez, J. (2021). Experimental Analysis of Appearance Maps as Descriptor Manifolds Approximations. In: Tsapatsoulis, N., Panayides, A., Theocharides, T., Lanitis, A., Pattichis, C., Vento, M. (eds) Computer Analysis of Images and Patterns. CAIP 2021. Lecture Notes in Computer Science, vol. 13053. Springer, Cham. https://doi.org/10.1007/978-3-030-89131-2_10

  • DOI: https://doi.org/10.1007/978-3-030-89131-2_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89130-5

  • Online ISBN: 978-3-030-89131-2

  • eBook Packages: Computer Science (R0)
