
Extending the interaction area for view-invariant 3D gesture recognition


Abstract:

This paper presents a non-intrusive approach for view-invariant hand gesture recognition. The representation of gestures changes dynamically depending on the camera viewpoint, so a difference in the user's position between the training phase and the evaluation phase can severely compromise recognition. The proposed approach calibrates two Microsoft Kinect depth cameras to allow 3D modeling of dynamic hand movements. Gestures are modeled as 3D trajectories and classified with Hidden Markov Models. The approach is trained on data from one viewpoint and tested on data from very different viewpoints, with an angular variation of 180°. The average recognition rate is always higher than 94% and is comparable to the rate obtained when training and testing on gestures from the same viewpoint, so the approach is indeed view-invariant. Comparing these results with those obtained using a single depth camera demonstrates that the adoption of two calibrated cameras is crucial.
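
The paper does not publish code; the following Python sketch merely illustrates the kind of pipeline the abstract describes: mapping 3D hand positions from a second, extrinsically calibrated Kinect into the first camera's reference frame, then classifying the fused 3D trajectories with one Hidden Markov Model per gesture class. The hmmlearn library, the function names, and all parameter values here are assumptions chosen for illustration, not the authors' implementation.

import numpy as np
from hmmlearn import hmm

def to_common_frame(points_cam2, R, t):
    # Rigid transform mapping 3D points (N, 3) from the second calibrated
    # Kinect into the first camera's frame, using the extrinsic rotation R
    # and translation t obtained from calibration (assumed available).
    return points_cam2 @ R.T + t

def train_gesture_models(trajectories_by_class, n_states=5):
    # Fit one Gaussian HMM per gesture class on sequences of 3D hand
    # positions; each trajectory is a (T, 3) array of fused coordinates.
    models = {}
    for label, trajectories in trajectories_by_class.items():
        X = np.vstack(trajectories)                 # stacked observations
        lengths = [len(t) for t in trajectories]    # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag",
                                n_iter=100, random_state=0)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(trajectory, models):
    # Assign the gesture label whose HMM gives the highest log-likelihood
    # to the observed 3D trajectory.
    return max(models, key=lambda label: models[label].score(trajectory))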
Date of Conference: 15-18 October 2012
Date Added to IEEE Xplore: 25 February 2013
Conference Location: Istanbul, Turkey
