Abstract:
Facial action units provide an objective characterization of facial muscle movements. Automatic estimation of facial action unit intensities is a challenging problem given individual differences in neutral face appearance and the need to generalize across different poses, illumination conditions, and datasets. In this paper, we introduce the Local-Global Ranking method as a novel alternative to direct prediction of facial action unit intensities. Our method takes advantage of the additional information present in videos and image collections of the same person (e.g. a photo album). Instead of trying to estimate facial expression intensities independently for each image, our proposed method performs a two-stage ranking: a local pair-wise ranking followed by a global ranking. The local ranking is designed to be accurate and robust by making a simple 3-class comparison (higher, equal, or lower) between randomly sampled pairs of images. We use a Bayesian model to integrate all these pair-wise rankings and construct a global ranking. Our Local-Global Ranking method shows state-of-the-art performance on two publicly-available datasets. Our cross-dataset experiments also show better generalizability.
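To make the two-stage idea concrete, the following is a minimal sketch of the pipeline the abstract describes: random image pairs are compared with a 3-class judgment (higher, equal, or lower), and the pairwise outcomes are aggregated into a single global ordering. Everything here is illustrative: the `compare` function stands in for the paper's learned local comparator, and a simple win-count (Borda-style) aggregation stands in for the paper's Bayesian integration step.

```python
import random

def compare(a, b, margin=0.1):
    # Hypothetical local 3-class comparator. In the actual method this
    # would be a learned model applied to a pair of face images; here we
    # compare scalar stand-in intensities directly.
    # Returns +1 (a higher), -1 (a lower), or 0 (roughly equal).
    if a > b + margin:
        return 1
    if b > a + margin:
        return -1
    return 0

def global_ranking(intensities, n_pairs=200, seed=0):
    """Aggregate random pairwise comparisons into a global ordering.

    A win-count aggregation is used here as a simple stand-in for the
    Bayesian model described in the abstract.
    """
    rng = random.Random(seed)
    n = len(intensities)
    wins = [0] * n
    for _ in range(n_pairs):
        i, j = rng.sample(range(n), 2)  # randomly sampled image pair
        outcome = compare(intensities[i], intensities[j])
        wins[i] += outcome
        wins[j] -= outcome
    # Sort image indices from lowest to highest estimated intensity.
    return sorted(range(n), key=lambda k: wins[k])

# Four images of the same person with (unknown to the ranker) intensities.
ranking = global_ranking([0.2, 0.9, 0.5, 0.0])
```

With enough sampled pairs, `ranking` orders the image indices from lowest to highest intensity; the equality class (`margin`) gives the local comparisons robustness to near-ties, which is one motivation the abstract gives for the 3-class design.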
Published in: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII)
Date of Conference: 23-26 October 2017
Date Added to IEEE Xplore: 01 February 2018
Electronic ISSN: 2156-8111