Detecting head movements in video-recorded dyadic conversations
This paper is about the automatic recognition of head movements in videos of face-to-face dyadic conversations. We present an approach in which recognition of head movements is cast as a multimodal frame classification problem based on visual and ...
Estimating interviewee's willingness in multimodal human robot interview interaction
This study presents a prediction model of a speaker's willingness level in human-robot interview interaction by using multimodal features (i.e., verbal, audio, and visual). We collected a novel multimodal interaction corpus, including two types of ...
Sarcasm detection on Facebook: a supervised learning approach
Sarcasm is a common feature of user interaction on social networking sites. Sarcasm differs from typical communication in the alignment of literal meaning with intended meaning. Humans can recognize sarcasm from sufficient context information, including from ...
Constructionist steps towards an autonomously empathetic system
Prior efforts to create an autonomous computer system capable of predicting what a human being is thinking or feeling from facial expression data have been largely based on outdated, inaccurate models of how emotions work that rely on many ...
Real-time stress assessment through PPG sensor for VR biofeedback
Existing stress measurement methods, including cortisol measurement, blood pressure monitoring, and psychometric testing, are invasive, impractical, or intermittent, limiting both clinical and biofeedback utility. Better stress measurement methods are ...
Multimodal prediction of the audience's impression in political debates
Debates are popular among politicians, journalists and scholars because they are a useful way to foster discussion and argumentation about relevant matters. In these discussions, people try to give a good impression (the immediate effect produced in the ...
Distinction of stress and non-stress tasks using facial action units
Long exposure to stress is known to lead to physical and mental health problems. But how can we as individuals track and monitor our stress? Wearables that measure heart rate variability have been studied to detect stress. Such devices, however, need to be ...
Effects of face and voice deformation on participant emotion in video-mediated communication
This research investigates the effectiveness of speech audio and facial image deformation tools that make conversation participants appear more positive. By conducting an experiment, we revealed that participants' feelings became more positive when ...
Investigating the generalizability of EEG-based cognitive load estimation across visualizations
We examine if EEG-based cognitive load (CL) estimation is generalizable across the character, spatial pattern, bar graph and pie chart-based visualizations for the n-back task. CL is estimated via two recent approaches: (a) Deep convolutional neural ...
Using virtual reality to control swarms of autonomous agents
Current two-dimensional methods of controlling large numbers of small unmanned aerial systems (sUAS) have limitations, such as the limits on a human's ability to track, efficiently control, and maintain situational awareness of large numbers of sUASs. The 2017 DARPA-...
Investigating the dimensions of conversational agents' social competence using objective neurophysiological measurements
Assessing the social competence of anthropomorphic artificial agents developed to produce engaging social interactions with humans has become of primary importance to effectively compare various appearances and/or behaviours. Here we attempt to ...