Abstract
Indoor positioning is critical for applications such as navigation, tracking, monitoring, and accessibility. For persons with visual impairments, it has major implications for independent mobility, access to services, and social inclusion. The lack of indoor positioning solutions with adequate accuracy remains a major constraint. Growth of indoor positioning systems has been held back by the limited reliability of existing techniques and by the cost of additional infrastructure and its maintenance. We propose a novel single-camera visual positioning solution for indoor spaces. Our method performs selective visual feature extraction and matching in real time using a monocular camera. Video recorded along routes is transformed into sparse, invariant point-based SURF features. To limit the real-time feature search and matching effort, the routes inside a building are decomposed into a connected graph. To determine position, the confidence of a path increases when a good feature match is found and decreases otherwise. Each query frame is matched against the existing database using a K-nearest-neighbor search, reinforcing the confidence of the matched path over subsequent frames. Results show a reliable positioning accuracy of \(\sim \)2 meters under variable lighting conditions. We also investigated error recovery, where the system re-positions the user within the neighboring edges. To promote crowdsourcing, the proposed system can add new visual features to the database while performing the matching task.
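The sketch below illustrates the kind of edge-confidence update the abstract describes: each route edge in the connected graph stores SURF descriptors recorded offline, a query frame is KNN-matched against candidate edges, and edge confidences are raised or decayed accordingly. This is a minimal illustration, not the authors' implementation; the edge names, thresholds, and gain/decay values are assumptions, and SURF requires an opencv-contrib (non-free) build of OpenCV.

```python
# Minimal sketch (not the authors' implementation) of per-edge confidence
# updates using SURF features and 2-NN descriptor matching with a ratio test.
# Assumes opencv-contrib-python with non-free modules enabled.
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

# Route graph: edge id -> neighbouring edges and pre-recorded descriptor database.
# Edge ids and structure here are illustrative only.
route_graph = {
    "E1": {"neighbours": ["E2"], "descriptors": None},
    "E2": {"neighbours": ["E1", "E3"], "descriptors": None},
    "E3": {"neighbours": ["E2"], "descriptors": None},
}
confidence = {edge: 0.0 for edge in route_graph}

def good_match_count(query_desc, db_desc, ratio=0.75):
    """Count 2-NN matches that pass Lowe's ratio test."""
    if query_desc is None or db_desc is None:
        return 0
    matches = matcher.knnMatch(query_desc, db_desc, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def update_confidence(frame_gray, candidate_edges, gain=1.0, decay=0.5, threshold=20):
    """Raise confidence of edges with enough good matches, decay the rest,
    and return the candidate edge with the highest confidence."""
    _, query_desc = surf.detectAndCompute(frame_gray, None)
    for edge in candidate_edges:
        n_good = good_match_count(query_desc, route_graph[edge]["descriptors"])
        if n_good >= threshold:
            confidence[edge] += gain
        else:
            confidence[edge] = max(0.0, confidence[edge] - decay)
    return max(candidate_edges, key=lambda e: confidence[e])
```

In use, only the currently matched edge and its graph neighbours would be passed as `candidate_edges`, which keeps the real-time search bounded and allows re-positioning onto a neighbouring edge after a matching failure, in the spirit of the error recovery described above.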
Acknowledgements
This project was funded and supported by the Assistech Lab at IIT Delhi, India. We are thankful to students Subham, Vishal, and Sushant, and to the other staff and researchers who contributed to this work.
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Upadhyay, V., Balakrishnan, M. (2022). Monocular Localization Using Invariant Image Feature Matching to Assist Navigation. In: Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P. (eds) Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science, vol 13341. Springer, Cham. https://doi.org/10.1007/978-3-031-08648-9_21
DOI: https://doi.org/10.1007/978-3-031-08648-9_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-08647-2
Online ISBN: 978-3-031-08648-9
eBook Packages: Computer Science (R0)