Haptic access to conventional 2D maps for the visually impaired

  • Published in: Journal on Multimodal User Interfaces

Abstract

This paper describes a framework for analysing map images and presenting their semantic content to blind users through alternative modalities (i.e. haptics and audio). The resulting haptic-audio representation of the map can be used by blind people for navigation and path-planning purposes. The proposed framework introduces novel algorithms that segment the map images using morphological filters and provide indexed information on both the street-network structure and the positions of the street names on the map. Off-the-shelf OCR and TTS algorithms are then used to convert the visual street-name information into audio messages. Finally, a grooved-line representation of the street network is generated, which blind users can explore with a haptic device. While navigating, audio messages are played that describe the user's current position (e.g. street name, cross-road notification and so on). Experimental results indicate that blind users consider the proposed system very promising and that it generates maps for the blind considerably faster than traditional methods such as Braille images.
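
To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the kind of processing the abstract outlines: separating a street network from text labels with basic morphological filtering, then running off-the-shelf OCR on the remaining label regions so the recognized street names could be handed to a TTS engine. It uses OpenCV and pytesseract purely for illustration; it is not the authors' segmentation algorithm, and the function name, structuring-element sizes, and area threshold are assumed values.

```python
# Hypothetical illustration only: separate a map's street network from its
# text labels with morphological opening (OpenCV), then OCR the label
# regions with pytesseract. Not the paper's algorithm.
import cv2
import numpy as np
import pytesseract


def split_streets_and_labels(map_path):
    gray = cv2.imread(map_path, cv2.IMREAD_GRAYSCALE)
    # Binarize, assuming dark ink (streets, text) on a light background.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Opening with long, thin structuring elements keeps elongated street
    # segments and suppresses small text strokes (sizes are assumptions).
    k_h = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
    k_v = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 25))
    streets = cv2.bitwise_or(
        cv2.morphologyEx(binary, cv2.MORPH_OPEN, k_h),
        cv2.morphologyEx(binary, cv2.MORPH_OPEN, k_v))

    # Whatever remains after removing the street mask is treated as
    # candidate text; connected components give rough label positions.
    text_only = cv2.bitwise_and(binary, cv2.bitwise_not(streets))
    n, _, stats, _ = cv2.connectedComponentsWithStats(text_only)
    boxes = [tuple(stats[i][:4]) for i in range(1, n)
             if stats[i][cv2.CC_STAT_AREA] > 20]

    # Off-the-shelf OCR on each candidate region; the recognized street
    # names would then be converted to audio messages by a TTS engine.
    names = []
    for (x, y, w, h) in boxes:
        roi = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(roi, config="--psm 7").strip()
        if text:
            names.append(((x, y, w, h), text))
    return streets, names
```

The split mask (streets) would drive the grooved-line haptic rendering, while the indexed name/position pairs would trigger the positional audio messages during haptic exploration.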



Author information

Corresponding author

Correspondence to Konstantinos Kostopoulos.

About this article

Cite this article

Kostopoulos, K., Moustakas, K., Tzovaras, D. et al. Haptic access to conventional 2D maps for the visually impaired. J Multimodal User Interfaces 1, 13–19 (2007). https://doi.org/10.1007/BF02910055


Keywords

Navigation