
Authors: Michael Hild and Fei Cheng

Affiliation: Osaka Electro-Communication University, Japan

Keyword(s): Visual Feedback, Grasping, Human as Actuator, Commands-by-Voice.

Related Ontology Subjects/Areas/Topics: Applications and Services ; Computer Vision, Visualization and Computer Graphics ; Enterprise Information Systems ; Human and Computer Interaction ; Human-Computer Interaction ; Mobile Imaging ; Motion, Tracking and Stereo Vision ; Tracking and Visual Navigation

Abstract: We propose a system for guiding a visually impaired person toward a target product on a store shelf using visual–auditory feedback. The system uses a hand–held, monopod–mounted CCD camera as its sensor and recognizes a target product in the images using sparse feature vector matching. Processing is divided into two phases: In Phase 1, the system acquires an image, recognizes the target product, and computes the product location on the image. Based on the location data, it issues a voice–based command, in response to which the user moves the camera closer toward the target product and adjusts the direction of the camera in order to keep the target product in the camera’s field of view. When the user’s hand has reached grasping range, the system enters Phase 2, in which it guides the user’s hand to the target product. The system is able to keep the camera’s direction steady during grasping even though the user tends to rotate the camera unintentionally because of the twisting of his upper body while reaching out for the product. Camera direction correction is made possible by a digital compass attached to the camera. The system is also able to guide the user’s hand right in front of the product even though the exact product position cannot be determined directly at the last stage, because the product disappears behind the user’s hand. Experiments with our prototype system show that system performance is highly reliable in Phase 1 and reasonably reliable in Phase 2.
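The Phase 1 loop described above (recognize the product, locate it in the image, issue a voice command) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the function name, dead-zone threshold, and command phrasing are all assumptions.

```python
# Illustrative sketch of the Phase-1 guidance step: map the recognized
# product's image location to a directional voice command. The product
# location is assumed to be the center of its matched feature cluster.

def guidance_command(target_x, target_y, img_w, img_h, tol=0.1):
    """Return a voice command that keeps the target in the field of view.

    (target_x, target_y): product center in pixels.
    tol: dead zone as a fraction of the image size; inside it the
    camera is considered well aimed and the user is told to advance.
    """
    dx = target_x - img_w / 2
    dy = target_y - img_h / 2
    if abs(dx) <= tol * img_w and abs(dy) <= tol * img_h:
        return "move forward"
    # Correct the larger normalized offset first.
    if abs(dx) / img_w >= abs(dy) / img_h:
        return "turn right" if dx > 0 else "turn left"
    return "aim lower" if dy > 0 else "aim higher"


print(guidance_command(500, 240, 640, 480))  # target right of center
print(guidance_command(320, 240, 640, 480))  # target centered
```

In the actual system, a command like this would be synthesized to speech after each captured frame; the compass-based direction correction of Phase 2 would add an angular offset term on top of the image-based offset.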

CC BY-NC-ND 4.0


Paper citation in several formats:
Hild, M. and Cheng, F. (2014). Grasping Guidance for Visually Impaired Persons based on Computed Visual-auditory Feedback. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISIGRAPP 2014) - Volume 3: VISAPP; ISBN 978-989-758-009-3; ISSN 2184-4321, SciTePress, pages 75-82. DOI: 10.5220/0004653200750082

@conference{visapp14,
author={Michael Hild and Fei Cheng},
title={Grasping Guidance for Visually Impaired Persons based on Computed Visual-auditory Feedback},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISIGRAPP 2014) - Volume 3: VISAPP},
year={2014},
pages={75-82},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004653200750082},
isbn={978-989-758-009-3},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications (VISIGRAPP 2014) - Volume 3: VISAPP
TI - Grasping Guidance for Visually Impaired Persons based on Computed Visual-auditory Feedback
SN - 978-989-758-009-3
IS - 2184-4321
AU - Hild, M.
AU - Cheng, F.
PY - 2014
SP - 75
EP - 82
DO - 10.5220/0004653200750082
PB - SciTePress