Authors: Ekanshi Agrawal¹; Jabez Christopher¹ and Vasan Arunachalam²
Affiliations: ¹Department of Computer Science and Information Systems, BITS Pilani, Hyderabad Campus, Telangana, India; ²Department of Civil Engineering, BITS Pilani, Hyderabad Campus, Telangana, India
Keyword(s):
Facial Expression Recognition, Emotion Classification, Periocular Region, Machine Learning.
Abstract:
Facial expressions are a key part of human behavior, and a way to express oneself and communicate with others. Multiple groups of muscles, belonging to different parts of the face, work together to form an expression. The emotion expressed by the region around the eyes and the one expressed by the region around the mouth may not agree with each other, yet may agree with the overall expression when the entire face is considered. In such cases, it is insufficient to focus on a single region of the face alone. This study evaluates expressions in three regions of the face (eyes, mouth, and the entire face) and records the expression reported by the majority. The data consists of images labelled with intensities of Action Units in the three regions for eight expressions. Six classifiers are used to determine the expression in the images. Each classifier is trained on all three regions separately, and then tested to determine an emotion label separately for each of the three regions of a test image. The image is finally labelled with the emotion present in at least two (i.e., the majority) of the three regions. Performance is averaged over five stratified train-test splits. The Gradient Boosting Classifier performs best with an average accuracy of 94%, followed closely by the Random Forest Classifier at 92%. The results and findings of this study will prove helpful in situations where faces are partially visible and/or certain parts of the face are not captured clearly.
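The region-level majority vote described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `majority_label` is hypothetical, and the fallback to the full-face prediction when all three regions disagree is an assumption, since the abstract does not specify how a three-way disagreement is resolved.

```python
from collections import Counter

def majority_label(eye_pred, mouth_pred, face_pred):
    """Return the emotion predicted by at least two of the three
    facial regions (eyes, mouth, entire face).

    Tie-break assumption (not stated in the abstract): if all three
    regions disagree, fall back to the full-face prediction.
    """
    preds = (eye_pred, mouth_pred, face_pred)
    label, count = Counter(preds).most_common(1)[0]
    return label if count >= 2 else face_pred

# Two regions agree -> their label wins over the third.
print(majority_label("happy", "happy", "neutral"))  # happy
# All three disagree -> assumed fallback to the full-face label.
print(majority_label("sad", "happy", "neutral"))    # neutral
```

In the study itself, each of the six classifiers would be trained once per region on the Action Unit intensities of that region, and this vote would combine the three per-region predictions into the final label for a test image.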