In this paper, we present and discuss three empirical studies, involving human subjects and human observers, on the recognition of emotions from the audio-lingual, visual-facial, and keyboard-evidence modalities. Many researchers agree that these modalities complement each other and that combining them can improve the accuracy of affective user models. However, there is a shortage of empirical work on the strengths and weaknesses of each modality, which is needed to build more accurate recognizers. In our research, we have investigated the recognition of six emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We have concluded that, in cases where a single modality is deficient in providing evidence for emotion recognition, the recognition process can be supported and complemented by the other modalities.
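The idea that one modality can support and complement another can be sketched as a simple late-fusion scheme: each modality produces a probability distribution over the six emotional states, and a weighted average combines whatever evidence is available. This is a minimal illustrative sketch, not the fusion method of the paper; the weights, function names, and example scores are assumptions.

```python
# Hypothetical late-fusion sketch over the three modalities discussed in the
# paper. Each modality yields a distribution over the six emotional states;
# modalities that provide no evidence are simply skipped, so the remaining
# ones complement the deficient one.

EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

def fuse(modality_scores, weights=None):
    """Weighted average of per-modality distributions; ignores missing ones."""
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if not available:
        raise ValueError("no modality provided evidence")
    if weights is None:
        weights = {m: 1.0 for m in available}  # equal weights by default
    total = sum(weights[m] for m in available)
    fused = {
        e: sum(weights[m] * available[m][e] for m in available) / total
        for e in EMOTIONS
    }
    return max(fused, key=fused.get), fused

# Example: the visual-facial modality is deficient (no evidence), so the
# audio-lingual and keyboard modalities carry the recognition.
scores = {
    "audio-lingual": {"neutral": 0.1, "happiness": 0.6, "sadness": 0.05,
                      "surprise": 0.1, "anger": 0.1, "disgust": 0.05},
    "visual-facial": None,
    "keyboard": {"neutral": 0.2, "happiness": 0.5, "sadness": 0.1,
                 "surprise": 0.1, "anger": 0.05, "disgust": 0.05},
}
label, dist = fuse(scores)
```

Under equal weights, the fused distribution is simply the mean of the two available distributions, so the predicted label here is "happiness".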