
Authors: Ekanshi Agrawal 1 ; Jabez Christopher 1 and Vasan Arunachalam 2

Affiliations: 1 Department of Computer Science and Information Systems, BITS Pilani, Hyderabad Campus, Telangana, India ; 2 Department of Civil Engineering, BITS Pilani, Hyderabad Campus, Telangana, India

Keyword(s): Facial Expression Recognition, Emotion Classification, Periocular Region, Machine Learning.

Abstract: Facial expressions are a key part of human behavior, and a way to express oneself and communicate with others. Multiple groups of muscles, belonging to different parts of the face, work together to form an expression. It is quite possible that the emotions expressed by the region around the eyes and the region around the mouth do not agree with each other, yet agree with the overall expression when the entire face is considered. In such a case, it would be inadequate to focus on a single region of the face. This study evaluates expressions in three regions of the face (eyes, mouth, and the entire face) and records the expression reported by the majority. The data consists of images labelled with intensities of Action Units in three regions – eyes, mouth, and the entire face – for eight expressions. Six classifiers are used to determine the expression in the images. Each classifier is trained on all three regions separately, and then tested to determine an emotion label separately for each of the three regions of a test image. The image is finally labelled with the emotion present in at least two (i.e. the majority) of the three regions. Average performance over five stratified train-test splits is taken. In this regard, the Gradient Boost Classifier performs the best with an average accuracy of 94%, followed closely by the Random Forest Classifier at 92%. The results and findings of this study will prove helpful in current situations where faces are partially visible and/or certain parts of the face are not captured clearly.
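The region-voting scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Action-Unit features are synthetic, the feature dimensions are invented, and the tie-break when all three regions disagree (falling back to the whole-face prediction) is an assumption, since the abstract does not specify one.

```python
# Hedged sketch of majority voting over per-region emotion predictions.
# All data here is synthetic; feature sizes and the tie-break rule are
# assumptions for illustration only.
from collections import Counter

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


def majority_vote(labels, fallback_index=2):
    """Return the label reported by at least two of the three regions.

    If all three regions disagree, fall back to the label at
    `fallback_index` (here: the whole-face region). The paper does not
    specify a tie-break, so this choice is an assumption.
    """
    winner, count = Counter(labels).most_common(1)[0]
    return winner if count >= 2 else labels[fallback_index]


rng = np.random.default_rng(0)
n_images, n_emotions = 120, 8

# Toy Action-Unit intensity features for each region (6 AUs per region).
regions = {name: rng.random((n_images, 6)) for name in ("eyes", "mouth", "face")}
y = rng.integers(0, n_emotions, size=n_images)

# One classifier per region, each trained on that region's AU intensities.
models = {name: GradientBoostingClassifier(n_estimators=20).fit(X, y)
          for name, X in regions.items()}

# Predict each region for one test image, then take the majority label.
test = {name: rng.random((1, 6)) for name in regions}
preds = [int(models[name].predict(test[name])[0])
         for name in ("eyes", "mouth", "face")]
final_label = majority_vote(preds)
```

The same voting function would apply unchanged to any of the six classifiers the study compares; only the per-region model changes.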

CC BY-NC-ND 4.0


Paper citation in several formats:
Agrawal, E.; Christopher, J. and Arunachalam, V. (2021). Emotion Recognition through Voting on Expressions in Multiple Facial Regions. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART; ISBN 978-989-758-484-8; ISSN 2184-433X, SciTePress, pages 1038-1045. DOI: 10.5220/0010306810381045

@conference{icaart21,
author={Ekanshi Agrawal and Jabez Christopher and Vasan Arunachalam},
title={Emotion Recognition through Voting on Expressions in Multiple Facial Regions},
booktitle={Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2021},
pages={1038-1045},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010306810381045},
isbn={978-989-758-484-8},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Emotion Recognition through Voting on Expressions in Multiple Facial Regions
SN - 978-989-758-484-8
IS - 2184-433X
AU - Agrawal, E.
AU - Christopher, J.
AU - Arunachalam, V.
PY - 2021
SP - 1038
EP - 1045
DO - 10.5220/0010306810381045
PB - SciTePress