Authors: Martin Eckert; Matthias Blex and Christoph M. Friedrich
Affiliation: University of Applied Sciences and Arts Dortmund, Germany
Keyword(s): Sensor Substitution, Spatial Audio, Object Detection, Convolutional Neural Networks, Mixed Reality.
Related Ontology Subjects/Areas/Topics: Biomedical Engineering; Biomedical Signal Processing; Development of Assistive Technology; Devices; Health Information Systems; Human-Computer Interaction; Human-Machine Interfaces for Disabled Persons; Pattern Recognition and Machine Learning; Physiological Computing Systems; Wearable Sensors and Systems
Abstract:
Finding everyday objects is a common but difficult task for blind people. This paper demonstrates the implementation of a wearable, deep-learning-backed object detection approach in the context of visual impairment or blindness. The prototype aims to substitute the user's impaired eye with technical sensors. By scanning its surroundings, the prototype provides a situational overview of the objects around the device. Object detection is implemented with a near real-time deep learning model, YOLOv2, which supports the detection of 9000 object classes. The prototype can display and read out the names of augmented objects, which can be selected by voice command and used as directional guides for the user via 3D audio feedback. The distance to a selected object is derived from the HoloLens's spatial model. The wearable solution offers the opportunity to locate objects efficiently and to support orientation without extensive training of the user. A preliminary evaluation covered the detection rate of the speech recognition and the response times of the server.
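To illustrate the directional-guide idea described in the abstract, the sketch below computes the azimuth (relative to the user's heading) and the distance from the device to a selected object anchor, the two quantities a 3D-audio cue and a distance announcement would need. This is a minimal geometric sketch, not the paper's implementation; the function name, the (x, z) ground-plane coordinates, and the yaw convention are assumptions for illustration.

```python
import math

def direction_and_distance(user_pos, user_yaw_deg, obj_pos):
    """Return (azimuth_deg, distance_m) from the user to an object.

    Positions are (x, z) ground-plane coordinates in metres; the yaw is
    the user's heading in degrees, measured clockwise from the +z axis.
    The azimuth is relative to the heading, in (-180, 180]: 0 means
    straight ahead, positive means to the user's right.
    """
    dx = obj_pos[0] - user_pos[0]
    dz = obj_pos[1] - user_pos[1]
    distance = math.hypot(dx, dz)
    # World-frame bearing of the object, clockwise from +z.
    bearing = math.degrees(math.atan2(dx, dz))
    # Wrap the heading-relative angle into (-180, 180].
    azimuth = (bearing - user_yaw_deg + 180.0) % 360.0 - 180.0
    return azimuth, distance
```

A spatial-audio engine could then pan a repeating cue to `azimuth` and have text-to-speech announce `distance`, e.g. an object 2 m straight ahead yields an azimuth of 0 degrees.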