
Methods and challenges for creating an emotional audio-visual database


Abstract:

Emotion plays a very important role in human communication and can be expressed either verbally through speech (e.g. pitch, intonation, prosody, etc.) or non-verbally through facial expressions, gestures, etc. Most contemporary human-computer interaction systems are deficient in interpreting this information and therefore lack emotional intelligence: they are unable to identify a human's emotional state and hence cannot react appropriately. To overcome these limitations, machines must be trained on annotated emotional data samples. Motivated by this fact, we have attempted to collect and create an audio-visual emotional corpus. Audio-visual signals of multiple subjects were recorded while they watched either a presentation (with background music) or emotional video clips. After recording, subjects were asked to express how they felt and to read out sentences that appeared on the screen. The recorded data were annotated both by the subjects themselves (self-annotation) and by other annotators.
Date of Conference: 01-03 November 2017
Date Added to IEEE Xplore: 14 June 2018
Electronic ISSN: 2472-7695
Conference Location: Seoul, Korea (South)
