Exploiting transfer learning for personalized view invariant gesture recognition


Abstract:

A robust gesture recognition system is an essential component of many human-computer interaction applications. In particular, the widespread adoption of portable devices and the diffusion of autonomous systems with limited power and load capacity have increased the need for efficient recognition algorithms that operate on video streams recorded with low-cost devices and that can cope with the challenging issue of point-of-view changes. A further challenge arises because different users tend to perform the same gesture with different styles and speeds. Thus a classifier trained on gesture data from one set of users may perform poorly when data from other users are processed. However, since a mobile device or a robot is often intended to be used by a single person or a small group of people, it would be desirable to have a gesture recognition system designed specifically for those users. In this paper we introduce a novel approach to address the problems of view invariance and user personalization in the context of gesture interaction systems. More specifically, we propose a domain adaptation framework based on a feature space augmentation approach operating on robust view-invariant Self-Similarity Matrix descriptors. To prove the effectiveness of our method, a dataset of 17 users performing 10 different gestures under 3 points of view is collected and an extensive experimental evaluation is performed.
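The abstract names two ingredients without detailing them: a Self-Similarity Matrix (SSM) descriptor, whose pairwise intra-sequence distances make it largely stable under viewpoint changes, and a feature-space-augmentation scheme for adapting a generic classifier to a new user. The sketch below is a minimal illustration of both ideas, not the authors' implementation: the helper names (self_similarity_matrix, augment_features), the joint-trajectory input, and the use of the classic EasyAdapt mapping (Daume III, 2007) as the augmentation are all assumptions for demonstration; the paper's actual descriptor extraction and adaptation details may differ.

    import numpy as np

    def self_similarity_matrix(trajectory):
        # SSM of a per-frame feature trajectory of shape (T, d):
        # entry (i, j) is the Euclidean distance between frames i and j.
        # Pairwise distances within a sequence change little with the
        # camera viewpoint, which motivates SSM-based descriptors.
        diffs = trajectory[:, None, :] - trajectory[None, :, :]
        return np.linalg.norm(diffs, axis=-1)

    def augment_features(x, domain):
        # EasyAdapt-style augmentation (an assumed stand-in for the
        # paper's scheme): source samples map to [x, x, 0], target
        # (new-user) samples to [x, 0, x], so one linear classifier
        # can learn shared and user-specific weights jointly.
        zeros = np.zeros_like(x)
        if domain == "source":
            return np.concatenate([x, x, zeros])
        if domain == "target":
            return np.concatenate([x, zeros, x])
        raise ValueError("domain must be 'source' or 'target'")

    # Example: a descriptor from generic training users ("source") and
    # one from the new user ("target") feed the same augmented space.
    T, d = 60, 15                              # frames, feature dim (illustrative)
    ssm = self_similarity_matrix(np.random.rand(T, d))
    descriptor = ssm[np.triu_indices(T, k=1)]  # flatten the upper triangle
    x_src = augment_features(descriptor, "source")
    x_tgt = augment_features(descriptor, "target")
    print(x_src.shape, x_tgt.shape)            # both 3x the descriptor length

A standard classifier trained on the augmented vectors can then weight the shared block for patterns common to all users and the target block for the personal style of the intended user, which is the intuition behind personalization via feature augmentation.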
Date of Conference: 04-09 May 2014
Date Added to IEEE Xplore: 14 July 2014
Electronic ISBN: 978-1-4799-2893-4

Conference Location: Florence, Italy
