Abstract:
Most facial expression recognition methods assume frontal or near-frontal head poses, and their accuracy usually decreases sharply when tested with non-frontal poses. Training a 2D pose-specific classifier for a large number of discrete poses can be time consuming due to the need for many samples per pose. On the other hand, 2D and 3D view-point independent approaches are usually not robust to very large head rotations. In this paper we transform the problem of facial expression recognition under large head rotations into a missing data classification problem. 3D data of the face are projected onto a head pose invariant 2D representation; in this projection the only difference between poses is due to self-occlusions with respect to the depth sensor's position. Once projected, the visible part of the face is split into overlapping patches, which are fed to independent local classifiers, and a voting scheme gives the final output. Experimental results on common benchmarks show that our method can accurately recognize facial expressions in a much larger pan and tilt range than state-of-the-art approaches, obtaining performance comparable to the best existing systems that work only in narrower ranges.
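The patch-and-vote step described above can be sketched in a few lines: the pose-invariant 2D face map is split into overlapping patches, self-occluded patches are treated as missing, and each visible patch votes through its own local classifier. This is a minimal illustration, not the paper's implementation; the names `extract_patches` and `vote`, the NaN encoding of occlusions, and the majority-vote rule are all assumptions for the sketch.

```python
import numpy as np

def extract_patches(face, patch, stride):
    """Split a 2D face map into overlapping patches. Patches lying
    entirely in self-occluded regions (encoded as NaN) are marked
    missing (None). Assumed encoding, not from the paper."""
    h, w = face.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = face[y:y + patch, x:x + patch]
            patches.append(None if np.isnan(p).all() else p)
    return patches

def vote(patches, classifiers, n_classes):
    """Each visible patch casts one vote via its own local classifier;
    missing patches simply abstain (the missing-data view)."""
    counts = np.zeros(n_classes)
    for p, clf in zip(patches, classifiers):
        if p is not None:
            counts[clf(p)] += 1
    return int(np.argmax(counts))
```

For example, on a face map whose right half is self-occluded, the patches covering that half are skipped and only the visible-side classifiers contribute to the final expression label.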
Published in: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Date of Conference: 04-08 May 2015
Date Added to IEEE Xplore: 23 July 2015
Electronic ISBN: 978-1-4799-6026-2