Abstract
Visual processing is highly efficient, making vision the primary means by which people gather information about their environment. For blind people, that information must be complemented by another powerful data source: sound. To complement the sounds of the white cane, the HOLOTECH prototype captures and segments video images and produces specific sounds to signal potential hazards. The underlying model is based on a set of Neural Networks coordinated by an Expert System, making it possible to react to any new event in real time. This paper presents an outline of the model, the project, and a test set to evaluate one of the Neural Networks, which is specialized in detecting and evaluating faces and other objects such as cars. The main contribution of this work is automating the selection model for the proper combination of information, discarding unnecessary data, and defining the minimum precision requirements needed to fulfill the current goal.
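The coordination idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, rule table, thresholds, and sound labels are all hypothetical. It shows how a rule-based layer could filter detections from specialized networks against per-class minimum precision requirements and choose a single alert sound for the most urgent remaining hazard.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class reported by a specialized network
    confidence: float  # detector's confidence, in [0, 1]
    distance_m: float  # estimated distance to the object, in meters

# Illustrative rule table (hypothetical values): per-class minimum
# confidence and the sound to emit for that class.
RULES = {
    "car":  {"min_conf": 0.8, "sound": "low_beep"},
    "face": {"min_conf": 0.6, "sound": "soft_chime"},
}

def select_alert(detections):
    """Discard detections below their class's precision requirement,
    then alert on the nearest remaining hazard (closest = most urgent)."""
    valid = [d for d in detections
             if d.label in RULES and d.confidence >= RULES[d.label]["min_conf"]]
    if not valid:
        return None
    nearest = min(valid, key=lambda d: d.distance_m)
    return RULES[nearest.label]["sound"]

frame = [Detection("car", 0.92, 5.0),
         Detection("face", 0.55, 1.0),   # below threshold: discarded
         Detection("face", 0.70, 2.0)]
print(select_alert(frame))  # the face at 2 m outranks the car at 5 m
```

In a real system each `Detection` would come from a trained network (e.g. a cascade object detector) running on segmented video frames; the point of the sketch is only the rule-based selection step, which discards unnecessary data before any sound is produced.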
Copyright information
© 2018 Springer International Publishing AG
About this paper
Cite this paper
Park, J.S., De Luise, D.L., Hemanth, D.J., Pérez, J. (2018). Environment Description for Blind People. In: Balas, V., Jain, L., Balas, M. (eds) Soft Computing Applications. SOFA 2016. Advances in Intelligent Systems and Computing, vol 633. Springer, Cham. https://doi.org/10.1007/978-3-319-62521-8_30
DOI: https://doi.org/10.1007/978-3-319-62521-8_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-62520-1
Online ISBN: 978-3-319-62521-8
eBook Packages: Engineering (R0)