
Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired


Abstract

To help the visually impaired navigate unfamiliar environments such as public buildings, this paper presents a novel smart phone, vision-based indoor localization and guidance system called Seeing Eye Phone. The system consists of the user's smart phone and a server. The smart phone captures forward-facing images as the user walks and transmits them to the server. The server detects and describes 2D features in the phone images using SURF and matches them against the 2D features of stored map images, which are annotated with the corresponding 3D information of the building. Once features are matched, the Direct Linear Transform is run on a subset of correspondences to obtain a rough initial pose estimate, which the Levenberg–Marquardt algorithm then refines toward an optimal solution. From the refined pose and the camera's intrinsic parameters, the user's location and orientation are computed using the 3D location data stored for the features of each map image. This positional information is transmitted back to the smart phone and communicated to the user via text-to-speech. The guidance system combines efficient techniques such as SURF, homographies, multi-view geometry, and 3D-to-2D reprojection to solve a unique problem that will benefit the visually impaired. The experimental results demonstrate the feasibility of accomplishing a complex task with a simple machine vision system design and the potential of building a commercial product on this design.
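To make the server-side pipeline concrete, the sketch below shows the two core steps in Python/OpenCV: SURF feature matching against a stored map image, then pose recovery from the resulting 2D–3D correspondences. It is a minimal illustration under stated assumptions, not the authors' implementation: it assumes opencv-contrib-python (which ships SURF), the map_descriptors and map_points_3d inputs are hypothetical stand-ins for the paper's annotated map database, and OpenCV's SOLVEPNP_ITERATIVE solver (an initial estimate refined by Levenberg–Marquardt) stands in for the paper's DLT-plus-LM scheme, here wrapped in RANSAC for robustness to mismatches.

```python
# Minimal sketch (not the authors' code): server-side localization from one
# phone image. Assumes opencv-contrib-python for SURF; map_descriptors and
# map_points_3d are hypothetical stand-ins for the stored map database.
import cv2
import numpy as np

def estimate_user_pose(query_img, map_descriptors, map_points_3d, K, dist):
    """Return the user's position and orientation in building coordinates,
    or None if the image cannot be matched reliably."""
    # Detect and describe 2D features in the phone image with SURF.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(query_img, None)
    if descriptors is None:
        return None

    # Match against the stored map features; Lowe's ratio test discards
    # ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(descriptors, map_descriptors, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 6:  # too few correspondences for a stable pose
        return None

    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in good])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in good])

    # Pose from 2D-3D correspondences: SOLVEPNP_ITERATIVE computes a rough
    # initial estimate and refines it with Levenberg-Marquardt, mirroring
    # the DLT-then-LM scheme described above; RANSAC rejects outliers.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    # The user's location is the camera center C = -R^T t; R gives heading.
    R, _ = cv2.Rodrigues(rvec)
    position = (-R.T @ tvec).ravel()
    return position, R
```

In the full system, the returned position and orientation would be translated into guidance text and sent back to the phone for text-to-speech output.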



Acknowledgments

The authors gratefully acknowledge the financial support of the National Natural Science Foundation of China (No. 61100170) and the Fundamental Research Funds for the Central Universities of China (No. 12lgpy37). They are especially grateful to the anonymous reviewers of this paper for their invaluable comments.

Author information


Corresponding author

Correspondence to Dah-Jye Lee.


Cite this article

Zhang, D., Lee, DJ. & Taylor, B. Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired. Machine Vision and Applications 25, 811–822 (2014). https://doi.org/10.1007/s00138-013-0575-0


Keywords

Navigation