CamNav: a computer-vision indoor navigation system

Published in The Journal of Supercomputing.

Abstract

We present CamNav, a vision-based system that provides indoor navigation services. CamNav captures images in real time while the user is walking and recognizes the user's current location; it requires no installation of indoor localization devices. In this paper, we describe techniques that improve the recognition accuracy of an existing system that uses oriented FAST and rotated BRIEF (ORB) features in its location-matching procedure. We employ multiscale local binary pattern (MSLBP) features to recognize places. We implemented CamNav and conducted experiments comparing the accuracy obtained with ORB, the scale-invariant feature transform (SIFT), MSLBP, and the combination of MSLBP with either ORB or SIFT. A dataset of 42 classes was constructed for the assessment; each class corresponds to one location and contains 100 images for training and 24 images for testing. The evaluation results demonstrate that place recognition with MSLBP features is more accurate than with SIFT features: the accuracy with SIFT, MSLBP, and ORB is 88.19%, 91.27%, and 96.33%, respectively. The overall accuracy increased to 93.55% after combining MSLBP with SIFT, and to 97.52% after combining MSLBP with ORB.
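To illustrate the MSLBP descriptor named in the abstract, the following is a minimal pure-Python sketch of multiscale local binary patterns: each pixel is encoded by comparing its eight neighbours at radius r against the centre value, and per-radius code histograms are concatenated into one feature vector. This is an illustrative simplification, not the paper's implementation; the radii, the 8-neighbour sampling on a square ring, and the histogram layout are assumptions for the example.

```python
def lbp_code(img, y, x, r=1):
    """8-neighbour local binary pattern code for pixel (y, x) at radius r:
    each neighbour whose value is >= the centre contributes one bit."""
    centre = img[y][x]
    # Neighbour offsets, clockwise from the top-left corner, scaled by r.
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code


def mslbp_histogram(img, radii=(1, 2)):
    """Multiscale LBP: concatenate 256-bin LBP code histograms computed
    at several radii over a grayscale image (list of pixel rows)."""
    h, w = len(img), len(img[0])
    hist = []
    for r in radii:
        counts = [0] * 256
        # Skip an r-pixel border so every neighbour index stays in bounds.
        for y in range(r, h - r):
            for x in range(r, w - r):
                counts[lbp_code(img, y, x, r)] += 1
        hist.extend(counts)
    return hist
```

Place recognition would then compare such histograms between a query frame and stored location models, for example with a chi-square distance or a k-nearest-neighbour classifier; that matching step, like the parameters above, is assumed for illustration rather than taken from the paper.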

Notes

  1. TensorFlow: https://www.tensorflow.org/.

  2. CamNav-dataset: https://github.com/akarkar/CamNav-dataset.

  3. NVIDIA Technology: https://www.nvidia.com/.

  4. RStudio: http://www.rstudio.com/.

Acknowledgements

This publication was supported by Qatar University Collaborative High Impact Grant QUHI-CENG-18/19-1. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of Qatar University.

Author information

Corresponding author

Correspondence to Abdel Ghani Karkar.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Karkar, A.G., Al-Maadeed, S., Kunhoth, J. et al. CamNav: a computer-vision indoor navigation system. J Supercomput 77, 7737–7756 (2021). https://doi.org/10.1007/s11227-020-03568-5

Keywords

Navigation