Abstract
Public interest in improving the quality of life of people with disabilities is growing, visual impairment being one of the conditions around which several research projects have been developed. Assistive robotics is a branch dedicated to supporting the mobility and rehabilitation of people with visual and other disabilities. This work describes the construction and use of a robotic cane to assist people with visual impairment. The robot structure is produced by 3D printing, and the electronic system is designed around Arduino technology. The robot features a distance sensor to detect possible collisions, a GPS to track its movements, and two DC motors in a caterpillar-track configuration for cane mobility. In addition, the robot connects to mobile devices via Bluetooth: the mobile application coordinates the movements of the robot in two modes, manual and autonomous, allowing the user to reach the desired location while the user's position is reported to the web. The proposal is tested in a structured environment, where patients give their perspective through a usability test and its characteristics are examined by an expert.
1 Introduction
A living being interacts with its surroundings through the senses, vision being one of the fundamental ones that enable both the manipulation of objects and movement through spaces, whether structured or unstructured [1]. A condition affecting the visual sense therefore greatly limits how an individual interacts with the environment. In fact, as of 2010 approximately 285 million people with visual impairment were reported worldwide, of whom 39 million were blind and 246 million had low vision [2]. To mitigate this problem and provide opportunities for inclusion, appropriate languages, methods, and techniques have been developed so that people can use other senses, such as touch and hearing, to compensate for visual impairment [3]. Independent movement in particular is a challenge for people with vision problems, especially in unstructured or unknown environments. In these cases, other people or guide dogs are required to support the blind; however, the first solution involves costs that are often beyond the means of the disabled person, while guide dogs, being red-green color blind and unable to interpret street signs, can help avoid obstacles but may present problems for guiding [4]. From another perspective, navigation-assistance technologies try to provide additional support to users by improving recognition of their environment or serving as a guide in places with adequate infrastructure for blind people, but precise solutions are not yet widely available where cities are not adapted for these users, especially in developing countries [5].
Assistive robotics, for its part, seeks to provide greater benefit to people with visual impairment through robots that can be customized to the blind person, considering characteristics such as level of blindness, age, and height [6], with the objective of providing greater opportunities for disabled people, giving them independence and improving their inclusion in society [7].
Beyond solutions developed for industry, robotics has evolved to support society. Assistive robotics aims to provide support in daily tasks normally performed by people without any type of disability. In this context, in recent years assistance robots have given significant support to people with some type of disability, whether by improving their lifestyle when performing domestic work or by improving rehabilitation techniques that speed up a patient's recovery [8]. Likewise, the use of robotics to assist people with visual impairment has made important contributions in recent years, whether through camera arrays, presence sensors, or some type of robotic cane, possibly with sound or force feedback.
The proposals that support blind people in the field of robotics are diverse. For cases where blind people must interact in buildings such as hotels and shops, solutions such as the one in [9] are presented, where the implemented robot welcomes the user and offers three forms of assistance (sighted guide, escort, and information kiosk), in addition to providing infrastructure information. Provided that the robot knows the building's layout, it can offer assistance that does not depend on third parties to help locate the affected person within a heavily travelled area. Other works have focused on smart glasses to support people with visual limitations: [10] proposes smart glasses for the visually impaired using deep-learning machine-vision techniques and ROS (Robot Operating System), fusing GPS, headphones, ultrasonic sensors, and a processing unit (Raspberry Pi Zero) to recognize objects in the path of a user with visual limitations. A work of similar characteristics is presented in [11], where wireless sensors connected to a main camera and a speaker detect obstacles by recognizing the spaces through which the user walks. Ground robots that detect obstacles in the area over which users walk are another form of support: equipping traditional canes with sensors that detect obstacles beyond the reach of the patient's hands can anticipate dangers when travelling through unknown spaces. In this line, [12,13,14] propose cane-type robots as guides in semi-structured interior spaces.
Unlike navigation with guide dogs, users of such proposals report increased confidence and a greater sense of security with these robots, which are presented as a path to free mobility and independence for blind people, especially in places that lack the infrastructure needed to guide them.
The present work describes the design of an intelligent cane that helps the user to overcome obstacles along the walking path. To enable mobility in unfamiliar areas, the cane incorporates an ultrasonic sensor, a GPS module, sound and vibration alert devices, and a structure based on two motors with a caterpillar-track traction system. This document is divided into five sections: the first presents the introduction and state of the art; the second formulates the problem; the third develops the proposal; the fourth reports the results; and the fifth presents the discussion and conclusions.
2 Problem Formulation
The purpose of the robotic cane is to facilitate the mobility of people with visual impairment, so it is important to design the robot around the requirements of the visually impaired. Figure 1 shows the components of the proposed robot, with an Arduino Nano as the information processor. The required components are an ultrasonic sensor for obstacle avoidance, a GPS module for localization, a Bluetooth module for external connectivity, a motor driver with two DC motors for mobility, and a vibration motor to warn of potential collisions.
Regarding connectivity, the robot is linked to a mobile device via a wireless connection that allows bidirectional communication. The mobile application controls the actions of the robot according to the mobility needs of the cane, using the basic motion equations of the mobile robot. In addition, the app records and shares the GPS data, so that the user's supervisor can access the information collected by the robot through the web.
3 Proposal Development
3.1 Motion Equations
The movements of the robot are executed by a unicycle-type mobile robot located at the end of the cane, which pulls the user in the desired direction. To improve traction on uneven terrain, the mobile robot uses caterpillar tracks, although this proposal focuses on structured workspaces. Figure 2 shows the parameters of the mobile robot's motion.
The movements allowed in a unicycle-type mobile robot are the linear speed (1), which moves the robot forward and backward, and the angular speed (2), which allows the robot to rotate on its own axis. Both are functions of the angular velocity of the right motor ωd, the angular velocity of the left motor ωi, the track sprocket radius r, and the distance between tracks d:

v = r(ωd + ωi)/2    (1)

ω = r(ωd − ωi)/d    (2)
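The two kinematic relations above can be sketched as a small helper function; this is an illustrative Python implementation of the standard differential-drive model, not code from the robot itself, and the numeric values in the usage note are assumed for illustration.

```python
def unicycle_velocities(omega_d, omega_i, r, d):
    """Body velocities of a unicycle-type (differential-drive) robot.

    omega_d, omega_i: angular speeds of the right and left drive wheels [rad/s]
    r: drive-wheel (track sprocket) radius [m]
    d: distance between the two tracks [m]
    """
    v = r * (omega_d + omega_i) / 2.0    # linear speed, Eq. (1)
    omega = r * (omega_d - omega_i) / d  # angular speed, Eq. (2)
    return v, omega
```

With equal wheel speeds the angular term cancels and the robot translates in a straight line; with opposite speeds the linear term cancels and the robot rotates in place, which is the maneuver the cane uses to avoid obstacles.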
3.2 Hardware
The physical components of the robot are designed to optimize space and reduce weight, facilitating movement. Figure 3 shows the 3D model made in Tinkercad: the front holds the distance sensor; the sides carry the drive and support gears for the caterpillar tracks; the top contains the compartments for the electrical elements; and the rear has the perforation into which the cane tube is inserted. All these components are produced by 3D printing in polylactic acid (PLA).
The other hardware system is the electronic circuit, built from low-cost components (60–80 USD). Figure 4 shows the circuit laid out in the Fritzing software, with all input and output elements connected to the Arduino Nano. A 9 V battery powers the sensors, actuators, and controllers; two DC motors are driven by a TB6612FNG driver that controls their speed and direction of rotation; a Bluetooth module (HC-06) connects directly to the serial communication pins; the vibration motor is switched by a BJT transistor with a 1 kΩ resistor at its base; and the distance (HC-SR04) and positioning (GPS) sensors use digital pins for data reading.
3.3 Software
The robotic cane runs two programs: the main program loaded through the Arduino software, and the user interface implemented as a mobile application. Figure 5 presents the main program's algorithm. At startup, the robot's inputs and outputs are configured; the sensors then read data, and the robot's position is sent to the mobile device via Bluetooth; the robot receives the app's commands for the movements required to reach the desired location. If a collision is possible, the robot alerts the user through vibration and corrects its course by rotating.
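The collision-handling step of the loop can be sketched as a pure decision function. This is an illustrative Python sketch, not the Arduino firmware; the 30 cm threshold and the command names are assumptions, since the paper does not state them.

```python
def collision_step(distance_cm, threshold_cm=30.0):
    """One decision step of the cane's main loop (sketch).

    distance_cm: current ultrasonic-sensor reading.
    Returns (vibrate, command): whether to trigger the vibration alert
    and which motion command to issue next.
    """
    if distance_cm < threshold_cm:
        # Obstacle too close: warn the user and rotate to avoid it.
        return True, "rotate"
    # Path clear: keep moving toward the goal sent by the app.
    return False, "forward"
```

Keeping the decision separate from the sensor and motor I/O makes the rule easy to test and to tune, e.g. raising the threshold for faster walking speeds.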
The mobile application is developed in App Inventor 2 for mobile devices running Android. Figure 6 shows the screens that make up the user interface. In the start window, the user selects the type of control: manual control commands the robot's movements through buttons and is intended for the caregiver, guiding the user as required; autonomous control (tracking) lets the user choose a location on the map, after which the application sends the movement commands to reach the destination. In addition, the mobile application plots all robot positions and records them in a NoSQL database (Firebase), accessible from any web browser connected to the Internet.
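A position record pushed to a Firebase-style database might look like the following Python sketch; the field names (`lat`, `lon`, `ts`) and the rounding precision are illustrative assumptions, as the paper does not describe the database schema.

```python
import json
import time


def position_record(lat, lon, timestamp=None):
    """Serialize one GPS fix for upload to the tracking database (sketch).

    lat, lon: coordinates in decimal degrees; six decimals (~0.1 m)
    is ample for consumer GPS accuracy.
    timestamp: Unix time of the fix; defaults to the current time.
    """
    return json.dumps(
        {
            "lat": round(lat, 6),
            "lon": round(lon, 6),
            "ts": timestamp if timestamp is not None else int(time.time()),
        },
        sort_keys=True,
    )
```

Storing each fix as a flat JSON object keyed by timestamp keeps the web view simple: the supervisor's browser can fetch the sequence and plot the path directly.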
4 Results
4.1 Functional Tests
The robot is assembled by printing the 3D components, installing the electronic circuit, and loading the programs. Figure 7 shows the assembled mobile robot: the power switch is located on top, the motors are fixed to the traction system, and all electronic components are housed inside. Experimental tests of the sensors and actuators confirm the correct functioning of the system.
To evaluate motion control in a structured environment (where the ground surface and the obstacles to avoid are known), a destination is set in the mobile application, which sends the commands to the robotic cane. The route travelled is shown in Fig. 8: the application screen shows the path followed by the robot to reach the desired position together with the generated instructions, avoiding obstacles as they appear, and the data sent to the database is displayed on the web as latitude and longitude coordinates.
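Deciding whether the destination has been reached from two latitude/longitude fixes requires a great-circle distance; a common way to compute it is the haversine formula, shown below as an illustrative Python sketch (the paper does not specify which distance computation the app uses).

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes
    given in decimal degrees (haversine formula)."""
    R = 6371000.0  # mean Earth radius [m]
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))
```

The app could, for instance, stop issuing forward commands once the distance to the chosen map point falls below a few metres, matching the accuracy of the onboard GPS module.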
4.2 Participants
To study the proposal from the user's point of view, tests were carried out with 4 visually impaired patients belonging to the “Association of Blind Workers of Ambato”, who were interested in the possibility of being assisted daily by this technology; other members of the association declined to participate for fear of accidents. Figure 9 shows the participants using the robotic cane in the experiments.
4.3 Usability Test
After the experiments, the System Usability Scale (SUS) is applied to measure the users' satisfaction with the proposal. Table 1 shows the results, with a usability score of 68.125/100. This implies a certain disagreement with the characteristics of the robotic cane, especially regarding frequent use and the safety provided by the robot: according to the patients, not having control of the movements creates fear and insecurity.
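The SUS score is computed with the standard scoring rule (Brooke): each of the 10 items is answered on a 1–5 Likert scale, odd-numbered (positive) items contribute their answer minus 1, even-numbered (negative) items contribute 5 minus their answer, and the sum is scaled by 2.5 to give 0–100. A minimal Python sketch, with the example responses in the test invented for illustration (the paper reports only the aggregate 68.125):

```python
def sus_score(responses):
    """SUS score for one respondent.

    responses: list of 10 Likert answers, 1 (strongly disagree)
    to 5 (strongly agree). Items 1,3,5,7,9 are positively worded,
    items 2,4,6,8,10 negatively worded.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 sum to 0-100


def sus_mean(all_responses):
    """Average SUS score over several respondents (the paper averages four)."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)
```

A mean near 68 is conventionally taken as roughly average usability, which is consistent with the mixed reception reported here.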
4.4 Medical Perspective
To analyze the proposal from a medical perspective, an optometrist who witnessed the experiments is interviewed. The analysis covers the four topics detailed below:
- Design of the robotic cane. The system detects only a single frontal depth point, which is unfavorable because obstacles have irregular shapes and present several depth levels that must be considered during avoidance. In addition, users of conventional canes are not used to being pulled and prefer to keep control of their movements. However, the proposal can be useful for patients in the training stage, who could get used to this mobility method.
- Needs of the patient. The proposal has useful components to cover the needs of visually impaired patients, but user comfort must be considered when directing them to the desired destination: the cane should support the user's mobility rather than move independently of them.
- Technology built into the cane. In the current technological era, assistive devices should incorporate as much technology as possible. This robot makes good progress by using a mobile device as a support tool, but it requires connectivity with more external components that aid the mobility of the visually impaired person.
- Improvement. The system should fully scan the surrounding environment, including ground characteristics, before determining the best movements to reach the destination. In addition, the user should have control of the robot's speed, for example through a manual control on the cane's handle. Finally, obstacle feedback should be subtler, so that the patient does not panic at possible collisions.
5 Discussions and Conclusions
Assistive technologies are constantly developing, and new products are continuously released to improve the quality of life of people with disabilities. Patients with visual impairment require daily mobility aids; such devices can be incorporated into the conventional cane to improve its assistive characteristics. In this work, a robotic cane is built on a caterpillar-track mobile robot that guides the patient to the destination, incorporating an obstacle sensor and a GPS. The robot is controlled by a mobile application that provides tools to direct the movements manually or autonomously, and the robot's position is continuously sent to a database on a web server. Experiments with real patients show that the cane meets the stated objectives, although the usability test reveals some dissatisfaction with certain characteristics of the robot, specifically the insecurity caused by the cane's actions, since the user does not know which movements the robot will perform. Finally, a specialist recognizes the positive aspects of the proposal and criticizes its shortcomings, highlighting the lack of environment data and of a movement speed control for the user.
The reviewed literature describes works with sensors and actuators similar to this proposal, moving robots in structured and semi-structured environments; the main difference of this work is the use of a mobile device as a support element to control and direct the robot's movements, including a database that continuously reports the robot's position. In addition, this work uses low-cost devices, unlike the laboratory robots used in other research, which allows the prototype to be replicated if necessary. Finally, the criticisms raised about the proposal serve as a starting point for new products and future work.
References
Krishna, S., Panchanathan, S.: Assistive technologies as effective mediators in interpersonal social interactions for persons with visual disability. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds.) ICCHP 2010. LNCS, vol. 6180, pp. 316–323. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14100-3_47
Pascolini, D., Mariotti, S.: Global data on visual impairments. Br. J. Ophthalmol. (2010). https://doi.org/10.1136/bjophthalmol-2011-300539
Rajapandian, B., Harini, V., Raksha, D., Sangeetha, V.: A novel approach as an AID for blind, deaf and dumb people. In: Proceedings of 2017 3rd IEEE International Conference on Sensing, Signal Processing and Security, ICSSS 2017, pp. 403–408. Institute of Electrical and Electronics Engineers Inc. (2017). https://doi.org/10.1109/SSPS.2017.8071628
Park, D., et al.: Active robot-assisted feeding with a general-purpose mobile manipulator: design, evaluation, and lessons learned. Robot. Auton. Syst. 124 (2020). https://doi.org/10.1016/j.robot.2019.103344
Herrera, D., Roberti, F., Carelli, R., Andaluz, V., Varela, J., Ortiz, J.: Modeling and path-following control of a wheelchair in human-shared environments. Int. J. Humanoid Robot. 15, 1–33 (2018). https://doi.org/10.1142/S021984361850010X
Guerreiro, J., Sato, D., Ahmetovic, D., Ohn-Bar, E., Kitani, K.M., Asakawa, C.: Virtual navigation for blind people: Transferring route knowledge to the real-World. Int. J. Hum. Comput. Stud. 135 (2020). https://doi.org/10.1016/j.ijhcs.2019.102369
Bolotnikova, A., Courtois, S., Kheddar, A.: Multi-contact planning on humans for physical assistance by humanoid. IEEE Robot. Autom. Lett., 1–8 (2019). https://doi.org/10.1109/lra.2019.2947907
Bonani, M., Oliveira, R., Correia, F., Rodrigues, A., Guerreiro, T., Paiva, A.: What my eyes can’t see, a robot can show me: exploring the collaboration between blind people and robots. In: The 20th International ACM SIGACCESS Conference, pp. 15–27 (2018). https://doi.org/10.1145/3234695.3239330
Mohammed, S., Park, H.W., Park, C.H., Amirat, Y., Argall, B.: Special issue on assistive and rehabilitation robotics. Auton. Robots, 513–517 (2017). https://doi.org/10.1007/s10514-017-9627-z
Azenkot, S., Feng, C., Cakmak, M.: Enabling building service robots to guide blind people a participatory design approach. In: 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE (2016). https://doi.org/10.1109/HRI.2016.7451727
Suresh, A., Arora, C., Laha, D., Gaba, D., Bhambri, S.: Intelligent smart glass for visually impaired using deep learning machine vision techniques and Robot Operating System (ROS). In: Kim, J.-H., et al. (eds.) RiTA 2017. AISC, vol. 751, pp. 99–112. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-78452-6_10
Vera, D., Marcillo, D., Pereira, A.: Blind guide: anytime, anywhere solution for guiding blind people. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Costanzo, S. (eds.) WorldCIST 2017. AISC, vol. 570, pp. 353–363. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56538-5_36
Guerreiro, J., Sato, D., Asakawa, S., Dong, H., Kitani, K.M., Asakawa, C.: CaBot: designing and evaluating an autonomous navigation robot for blind people. In: ASSETS 2019: The 21st International ACM SIGACCESS Conference on Computers and Accessibility, pp. 68–82 (2019). https://doi.org/10.1145/3308561.3353771
Megalingam, R.K., Vishnu, S., Sasikumar, V., Sreekumar, S.: Autonomous path guiding robot for visually impaired people. In: Mallick, P.K., Balas, V.E., Bhoi, A.K., Zobaa, A.F. (eds.) Cognitive Informatics and Soft Computing. AISC, vol. 768, pp. 257–266. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-0617-4_25
Noman, A.T., Chowdhury, M.A.M., Rashid, H., Rahman Faisal, S.M.S., Ahmed, I.U., Reza, S.M.T.: Design and implementation of microcontroller based assistive robot for person with blind autism and visual impairment. In: 20th International Conference of Computer and Information Technology (ICCIT). IEEE (2017). https://doi.org/10.1109/ICCITECHN.2017.8281806
© 2020 Springer Nature Switzerland AG

Varela-Aldás, J., Guamán, J., Paredes, B., Chicaiza, F.A. (2020). Robotic Cane for the Visually Impaired. In: Antona, M., Stephanidis, C. (eds.) Universal Access in Human-Computer Interaction. Design Approaches and Supporting Technologies. HCII 2020. Lecture Notes in Computer Science, vol. 12188. Springer, Cham. https://doi.org/10.1007/978-3-030-49282-3_36