
Environment Description for Blind People

  • Conference paper

Soft Computing Applications (SOFA 2016)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 633)

Abstract

Visual processing is very efficient, letting people use vision as their first means of gathering information about the environment. For blind people, that information must be complemented by another very powerful channel: sound. To complement the sounds of the white cane, the HOLOTECH prototype captures and segments video images and produces specific sounds to alert the user to potential hazards. The underlying model is based on a set of Neural Networks coordinated by an Expert System, making it possible to react to any new event in real time. This paper presents an outline of the model, the project, and a test set used to evaluate one of the Neural Networks, which specializes in detecting and evaluating faces and other objects such as cars. The main contribution of this work is automating the selection model for the proper combination of information, discarding unnecessary data, and defining the minimum precision requirements to fulfill the current goal.
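The abstract describes an Expert System layer that combines the outputs of several detectors, discards unnecessary data, and enforces minimum precision requirements for the current goal. The paper's own coordination logic is not reproduced here; as an illustrative sketch only, the fragment below shows one plausible shape for such a filter. All names (`Detection`, `select_detections`, the threshold values in `MIN_PRECISION`) are hypothetical, not taken from the HOLOTECH implementation.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # detector class, e.g. "face" or "car"
    confidence: float  # detector score in [0, 1]


# Hypothetical per-class minimum precision requirements: a detection is
# forwarded to the sound generator only if it meets the bar defined for
# the current goal (here, warning about street-level hazards).
MIN_PRECISION = {"car": 0.80, "face": 0.60}


def select_detections(detections, min_precision=MIN_PRECISION):
    """Keep only detections whose class is relevant to the current goal
    and whose confidence meets that class's minimum precision."""
    selected = []
    for d in detections:
        threshold = min_precision.get(d.label)
        if threshold is None:          # class irrelevant to this goal: discard
            continue
        if d.confidence >= threshold:  # meets the precision requirement
            selected.append(d)
    # Highest-confidence hazards first, so the most certain threats are
    # converted to sound with priority.
    return sorted(selected, key=lambda d: d.confidence, reverse=True)


# Example: one video frame yielding three candidate detections.
frame = [Detection("car", 0.91), Detection("face", 0.40), Detection("dog", 0.99)]
print([d.label for d in select_detections(frame)])  # → ['car']
```

In this sketch, the high-confidence "dog" detection is discarded as unnecessary data for the hazard-warning goal, and the low-confidence "face" fails its precision requirement; only the "car" detection would trigger a sound cue.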



Author information

Correspondence to D. López De Luise.


Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Park, J.S., De Luise, D.L., Hemanth, D.J., Pérez, J. (2018). Environment Description for Blind People. In: Balas, V., Jain, L., Balas, M. (eds) Soft Computing Applications. SOFA 2016. Advances in Intelligent Systems and Computing, vol 633. Springer, Cham. https://doi.org/10.1007/978-3-319-62521-8_30

  • DOI: https://doi.org/10.1007/978-3-319-62521-8_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-62520-1

  • Online ISBN: 978-3-319-62521-8

  • eBook Packages: Engineering (R0)
