Abstract:
Globally, at least 2.2 billion people have near or distant vision impairment. Those who live in low- and middle-income regions are 4x more likely to have vision impairmen...Show MoreMetadata
Abstract:
Globally, at least 2.2 billion people have near or distance vision impairment. Those who live in low- and middle-income regions are four times more likely to have vision impairment than those in high-income regions. Because many visually impaired people live in low- and middle-income countries, problems such as war, famine and poverty create an increasingly hostile environment in which their lives may be at risk from both hazards and dangerous people. The main aim of this research is to explore an assistive device that is inexpensive, mobile and life-enhancing for its user, helping the user to move through and explore their environment without limits and dramatically improving their quality of life. The assistive device in this paper uses smart technology such as sensors, cameras and voice commands to help the visually impaired identify and avoid potential obstacles and dangers as they move, supporting their decision making. This is done using deep learning and a TensorFlow Lite model trained on a custom dataset of images belonging to two categories: "Threat" person or "Neutral" person. The main controller is a Raspberry Pi, a small single-board computer that acts as the interface between all the sensors and hosts the deep learning model, enabling the system to determine the type of person encountered within 1.2 m of the visually impaired person.
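To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of how a two-class "Threat"/"Neutral" TensorFlow Lite classifier could be run on a Raspberry Pi and combined with a proximity reading against the paper's 1.2 m threshold. The model filename, label order, float input with 0-1 normalization, and the distance value passed in are illustrative assumptions.

```python
# Hypothetical sketch: two-class TFLite person classification on a Raspberry Pi,
# gated by a 1.2 m alert distance. Paths, labels and preprocessing are assumptions.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

LABELS = ["Neutral", "Threat"]   # assumed output order of the custom model
ALERT_DISTANCE_M = 1.2           # proximity threshold stated in the abstract

interpreter = Interpreter(model_path="person_classifier.tflite")  # hypothetical file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(image_path):
    """Return (label, confidence) for one camera frame."""
    _, height, width, _ = input_details["shape"]
    img = Image.open(image_path).convert("RGB").resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(input_details["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    idx = int(np.argmax(scores))
    return LABELS[idx], float(scores[idx])

def announce(image_path, distance_m):
    """Warn only when a 'Threat' person is inside the alert radius."""
    label, conf = classify(image_path)
    if label == "Threat" and distance_m <= ALERT_DISTANCE_M:
        print(f"Warning: possible threat {distance_m:.1f} m ahead ({conf:.0%} confidence)")
    else:
        print(f"{label} person detected at {distance_m:.1f} m")
```

In a real deployment the distance would come from the device's range sensor and the warning would be spoken through the voice interface rather than printed; this sketch only illustrates the classify-then-threshold decision described in the abstract.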
Published in: 2022 International Conference on Smart Applications, Communications and Networking (SmartNets)
Date of Conference: 29 November 2022 - 01 December 2022
Date Added to IEEE Xplore: 03 January 2023