Synthesising facial emotions | IEEE Conference Publication | IEEE Xplore

Abstract:

We present two approaches for the generation of novel video textures, which portray a human expressing different emotions. Here training data is provided by video sequences of an actress expressing specific emotions such as angry, happy and sad. The main challenge of modelling these video texture sequences is the high variance in head position and facial expression. Principal components analysis (PCA) is used to generate so-called 'motion signatures', which are shown to be complex and have non-Gaussian distributions. The first method uses a combined appearance model to transform the video data into a lower-dimensional Gaussian space. This can then be modelled using a standard autoregressive process. The second technique presented extracts subsamples from the original data using short temporal windows, some of which have Gaussian distributions and can be modelled by an autoregressive process (ARP). We find that the combined appearance technique produces more aesthetically pleasing clips but does not maintain the motion characteristics as well as the temporal window approach.
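The PCA-plus-autoregressive pipeline described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the synthetic "frames" (a few latent sinusoids mixed into pixel space), the dimensions T, D, k, and the AR order p are all hypothetical stand-ins, chosen only to show the shape of the method: project frames onto principal components to obtain signature trajectories, fit a vector AR(p) model to them by least squares, then iterate the fitted process and map back to pixel space to synthesize a novel sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the video data: T "frames", each a flattened
# D-pixel image, driven by a few smooth latent motions plus small noise.
T, D, k, p = 200, 64, 3, 2
t = np.linspace(0.0, 8.0 * np.pi, T)
latent = np.column_stack([np.sin(t), np.cos(0.5 * t), np.sin(0.25 * t)])
frames = latent @ rng.normal(size=(3, D)) + 0.01 * rng.normal(size=(T, D))

# PCA via SVD: project each mean-centred frame onto the top-k principal
# components. The T x k trajectories play the role of 'motion signatures'.
mean = frames.mean(axis=0)
X = frames - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
sigs = X @ Vt[:k].T

# Fit a vector AR(p) model to the signatures by least squares:
#   sigs[n] ~ [sigs[n-1], ..., sigs[n-p]] @ A
Y = sigs[p:]
Z = np.hstack([sigs[p - i - 1 : T - i - 1] for i in range(p)])
A, *_ = np.linalg.lstsq(Z, Y, rcond=None)

# Synthesize a novel signature trajectory by iterating the fitted process,
# then map it back to pixel space through the PCA basis.
traj = [sigs[i] for i in range(p)]
for _ in range(100):
    z = np.concatenate([traj[-1 - i] for i in range(p)])
    traj.append(z @ A)
new_frames = np.array(traj) @ Vt[:k] + mean
```

On real footage the signatures are non-Gaussian, which is exactly why the paper first transforms the data (combined appearance model) or subsamples it (temporal windows) before an AR process is a reasonable fit; on this smooth toy signal the plain AR(p) fit already tracks the trajectories closely.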
Date of Conference: 08-10 June 2004
Date Added to IEEE Xplore: 19 July 2004
Print ISBN:0-7695-2137-1
Conference Location: Bournemouth, UK