Abstract:
Summary form only given, as follows. The complete presentation was not made available for publication as part of the conference proceedings. In this talk we introduce automatic affect recognition based on multimodal sensory data. In the first part we introduce learning paradigms, classifiers, and information fusion architectures for the challenging task of multimodal affect recognition, with particular focus on classifier fusion, semi-supervised learning, and the extraction of discriminative features from audio, video, and bio-signals. In this context we introduce our ATLAS framework, an efficient tool for analyzing and annotating multimodal sensory data streams. The second part of this seminar is devoted to the presentation of case studies, recently carried out in our lab, on different affective data sets (University Ulm Multimodal Affective Corpus UUlmAC, SenseEmotion).
Date of Conference: 22-26 March 2021
Date Added to IEEE Xplore: 24 May 2021