
Towards multi-modal context recognition for hearing instruments


Abstract:

Current hearing instruments (HIs) rely solely on auditory scene analysis to adapt to the user's situation. As a result, these systems are limited in the number and type of situations they can detect. We investigate how context information derived from eye and head movements can be used to resolve such situations. We focus on two example problems that are challenging for current HIs: distinguishing concentrated work from interaction, and detecting whether a person is walking alone or walking while having a conversation. We collect an eleven-participant dataset (6 male, 5 female, aged 24–59) that covers different typical office activities. Using person-independent training and isolated recognition, we achieve an average precision of 71.7% (recall: 70.1%) for recognising concentrated work and a precision of 57.2% (recall: 81.3%) for detecting walking while conversing.
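
The headline numbers come from person-independent training, i.e. the classifier is never trained on data from the participant it is tested on. A minimal sketch of such a leave-one-participant-out precision/recall evaluation is shown below; the synthetic features, SVM classifier, and sample counts are illustrative assumptions, since the abstract does not specify the paper's actual feature set or model.

    # Sketch of person-independent (leave-one-participant-out) evaluation.
    # The features, classifier, and data are hypothetical stand-ins, not
    # the paper's actual pipeline.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.svm import SVC
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(0)
    n_participants = 11                      # eleven participants, as in the study
    X = rng.normal(size=(440, 8))            # hypothetical eye/head-movement features
    y = rng.integers(0, 2, size=440)         # 1 = "concentrated work", 0 = other
    groups = np.repeat(np.arange(n_participants), 40)  # participant ID per sample

    precisions, recalls = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = SVC().fit(X[train_idx], y[train_idx])   # assumed classifier
        y_pred = clf.predict(X[test_idx])
        precisions.append(precision_score(y[test_idx], y_pred, zero_division=0))
        recalls.append(recall_score(y[test_idx], y_pred, zero_division=0))

    print(f"avg precision: {np.mean(precisions):.1%}, avg recall: {np.mean(recalls):.1%}")

Averaging per-held-out-participant scores, as here, is one common way to report person-independent results; whether the paper averages per participant or pools predictions is not stated in the abstract.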
Date of Conference: 10-13 October 2010
Date Added to IEEE Xplore: 13 December 2010
Conference Location: Seoul, Korea (South)
