Multimodal-multisensor affect detection

Published: 01 October 2018

References

[1]
R. C. Baker and D. O. Guttfreund. 1993. The effects of written autobiographical recollection induction procedures on mood. Journal of Clinical Psychology, 49:563--568.
[2]
T. Baltrušaitis, N. Banda, and P. Robinson. 2013. Dimensional Affect Recognition using Continuous Conditional Random Fields. In Proceedings of the International Conference on Multimedia and Expo (Workshop on Affective Analysis in Multimedia).
[3]
L. Barrett. 2006. Are emotions natural kinds? Perspectives on Psychological Science, 1:28--58.
[4]
L. Barrett, B. Mesquita, K. Ochsner, and J. Gross. 2007. The experience of emotion. Annual Review of Psychology, 58:373--403.
[5]
L. F. Barrett. 2014. The conceptual act theory: A précis. Emotion Review, 6:292--297.
[6]
P. Barros, D. Jirak, C. Weber, and S. Wermter. 2015. Multimodal emotional state recognition using sequence-dependent deep hierarchical features. Neural Networks, 72:140--151.
[7]
N. Bosch, H. Chen, R. Baker, V. Shute, and S. K. D'Mello. 2015a. Accuracy vs. Availability Heuristic in Multimodal Affect Detection in the Wild. In Proceedings of the 17th ACM International Conference on Multimodal Interaction (ICMI 2015). ACM, New York.
[8]
N. Bosch, S. K. D'Mello, R. Baker, J. Ocumpaugh, V. Shute, M. Ventura, and L. Wang. 2015b. Automatic Detection of Learning-Centered Affective States in the Wild. In Proceedings of the 2015 International Conference on Intelligent User Interfaces (IUI 2015). ACM, New York, pp. 379--388.
[9]
N. Bosch, S. D'Mello, R. Baker, J. Ocumpaugh, and V. Shute. 2016. Using video to automatically detect learner affect in computer-enabled classrooms. ACM Transactions on Interactive Intelligent Systems, 6:17:1--17:31.
[10]
M. M. Bradley and P. J. Lang. 1994. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25:49--59.
[11]
R. Calvo, S. K. D'Mello, J. Gratch, and A. Kappas. 2015. The Oxford Handbook of Affective Computing. Oxford University Press, New York.
[12]
R. A. Calvo and S. K. D'Mello. 2010. Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing, 1:18--37.
[13]
L. Camras and J. Shutter. 2010. Emotional facial expressions in infancy. Emotion Review, 2(2):120--129.
[14]
P. Cardinal, N. Dehak, A. L. Koerich, J. Alam, and P. Boucher. 2015. ETS system for AV+EC 2015 challenge. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 17--23.
[15]
G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun. 2011. Emotion assessment from physiological signals for adaptation of game difficulty. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 41:1052--1063.
[16]
L. Chao, J. Tao, M. Yang, Y. Li, and Z. Wen. 2015. Long short term memory recurrent neural network based multimodal dimensional emotion recognition. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 65--72.
[17]
D. Chen, D. Jiang, I. Ravyse, and H. Sahli. 2009. Audio-visual emotion recognition based on a DBN model with constrained asynchrony. In Proceedings of the Fifth International Conference on Image and Graphics (ICIG 09). IEEE, Washington, DC, pp. 912--916.
[18]
S. Chen and Q. Jin. 2015. Multi-modal dimensional emotion recognition using recurrent neural networks. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 49--56.
[19]
J. Coan and J. Allen. 2007. Handbook of Emotion Elicitation and Assessment. Oxford University Press, New York.
[20]
J. A. Coan. 2010. Emergent ghosts of the emotion machine. Emotion Review, 2:274--285.
[21]
C. Conati and H. Maclaren. 2009. Empirically building and evaluating a probabilistic model of user affect. User Modeling and User-Adapted Interaction, 19:267--303.
[22]
R. Cowie, G. McKeown, and E. Douglas-Cowie. 2012. Tracing emotion: an overview. International Journal of Synthetic Emotions (IJSE), 3:1--17.
[23]
S. D'Mello and R. Calvo. 2013. Beyond the Basic Emotions: What Should Affective Computing Compute? In S. Brewster, S. Bødker, and W. Mackay, editors, Extended Abstracts of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2013). ACM, New York.
[24]
S. K. D'Mello. 2016. On the influence of an iterative affect annotation approach on inter-observer and self-observer reliability. IEEE Transactions on Affective Computing, 7:136--149.
[25]
S. K. D'Mello and J. Kory. 2015. A review and meta-analysis of multimodal affect detection systems. ACM Computing Surveys, 47:43:1--43:36.
[26]
S. D'Mello and A. Graesser. 2011. The half-life of cognitive-affective states during complex learning. Cognition & Emotion, 25:1299--1308.
[27]
S. K. D'Mello, N. Dowell, and A. C. Graesser. 2013. Unimodal and multimodal human perception of naturalistic non-basic affective states during Human-Computer interactions. IEEE Transactions on Affective Computing, 4:452--465.
[28]
A. Damasio. 2003. Looking for Spinoza: Joy, sorrow, and the feeling brain. Harcourt Inc., Orlando, FL.
[29]
C. Darwin. 1872. The expression of the emotions in man and animals. John Murray, London.
[30]
D. Datcu and L. Rothkrantz. 2011. Emotion recognition using bimodal data fusion. In Proceedings of the 12th International Conference on Computer Systems and Technologies. ACM, New York, pp. 122--128.
[31]
S. Dobrišek, R. Gajšek, F. Mihelič, N. Pavešić, and V. Štruc. 2013. Towards Efficient Multi-Modal Emotion Recognition. International Journal of Advanced Robotic Systems, 10:1--10.
[32]
P. Ekman. 1992. An argument for basic emotions. Cognition & Emotion, 6:169--200.
[33]
P. Ekman. 1994. Strong Evidence for Universals in Facial Expressions: A Reply to Russell's Mistaken Critique. Psychological Bulletin, 115:268--287.
[34]
H. Elfenbein and N. Ambady. 2002a. Is there an ingroup advantage in emotion recognition? Psychological Bulletin, 128:243--249.
[35]
H. Elfenbein and N. Ambady. 2002b. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128:203--235.
[36]
F. Eyben, M. Wöllmer, A. Graves, B. Schuller, E. Douglas-Cowie, and R. Cowie. 2010. On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues. Journal on Multimodal User Interfaces, 3:7--19.
[37]
FACET. 2014. Facial Expression Recognition Software. Emotient, Boston, MA.
[38]
J. Fontaine, K. Scherer, E. Roesch, and P. Ellsworth. 2007. The world of emotions is not two-dimensional. Psychological Science, 18:1050--1057.
[39]
A. J. Fridlund, P. Ekman, and H. Oster. 1987. Facial expressions of emotion. In A. W. Siegman and S. Feldstein, editors, Nonverbal behavior and communication, pp. 143--223. Erlbaum, Hillsdale, NJ.
[40]
J. M. Girard, J. F. Cohn, L. A. Jeni, M. A. Sayette, and F. De la Torre. 2015. Spontaneous facial expression in unscripted social interactions can be measured automatically. Behavior Research Methods, 47:1136--1147.
[41]
M. Glodek, S. Reuter, M. Schels, K. Dietmayer, and F. Schwenker. 2013. Kalman Filter Based Classifier Fusion for Affective State Recognition. In Z.-H. Zhou, F. Roli, and J. Kittler, editors, Proceedings of the 11th International Workshop on Multiple Classifier Systems. Springer, Berlin Heidelberg, pp. 85--94.
[42]
J. F. Grafsgaard, J. B. Wiggins, K. E. Boyer, E. N. Wiebe, and J. C. Lester. 2014. Predicting learning and affect from multimodal data streams in task-oriented tutorial dialogue. In J. Stamper, Z. Pardos, M. Mavrikis, and B. M. McLaren, editors, Proceedings of the 7th International Conference on Educational Data Mining. International Educational Data Mining Society, pp. 122--129.
[43]
J. J. Gross and L. F. Barrett. 2011. Emotion generation and emotion regulation: One or two depends on your point of view. Emotion Review, 3:8--16.
[44]
L. He, D. Jiang, L. Yang, E. Pei, P. Wu, and H. Sahli. 2015. Multimodal affective dimension prediction using deep bidirectional long short-term memory recurrent neural networks. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 73--80.
[45]
S. J. Heine, D. R. Lehman, K. Peng, and J. Greenholtz. 2002. What's wrong with cross-cultural comparisons of subjective Likert scales?: The reference-group effect. Journal of Personality and Social Psychology, 82:903--918.
[46]
S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735--1780.
[47]
S. Hommel, A. Rabie, and U. Handmann. 2013. Attention and Emotion Based Adaption of Dialog Systems. In E. Pap, editor, Intelligent Systems: Models and Applications, pp. 215--235. Springer Verlag, Berlin Heidelberg.
[48]
Z. Huang, T. Dang, N. Cummins, B. Stasak, P. Le, V. Sethu, and J. Epps. 2015. An investigation of annotation delay compensation and output-associative fusion for multimodal continuous emotion prediction. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 41--48.
[49]
M. Hussain, H. Monkaresi, and R. Calvo. 2012. Combining Classifiers in Multimodal Affect Detection. In Proceedings of the Australasian Data Mining Conference.
[50]
C. Izard. 1994. Innate and universal facial expressions: Evidence from developmental and cross-cultural research. Psychological Bulletin, 115:288--299.
[51]
C. Izard. 2010. The many meanings/aspects of emotion: Definitions, functions, activation, and regulation. Emotion Review, 2:363--370.
[52]
C. E. Izard. 2007. Basic emotions, natural kinds, emotion schemas, and a new paradigm. Perspectives on Psychological Science, 2:260--280.
[53]
W. James. 1884. What is an emotion? Mind, 9:188--205.
[54]
J. H. Janssen, P. Tacken, J. de Vries, E. L. van den Broek, J. H. Westerink, P. Haselager, and W. A. IJsselsteijn. 2013. Machines outperform laypersons in recognizing emotions elicited by autobiographical recollection. Human-Computer Interaction, 28:479--517.
[55]
D. Jiang, Y. Cui, X. Zhang, P. Fan, I. Gonzalez, and H. Sahli. 2011. Audio visual emotion recognition based on triple-stream dynamic Bayesian network models. In S. D'Mello, A. Graesser, B. Schuller, and J.-C. Martin, editors, Proceedings of the Fourth International Conference on Affective Computing and Intelligent Interaction. Springer-Verlag, Berlin Heidelberg, pp. 609--618.
[56]
M. Kächele, P. Thiam, G. Palm, F. Schwenker, and M. Schels. 2015. Ensemble methods for continuous affect recognition: Multi-modality, temporality, and challenges. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 9--16.
[57]
S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, Ç. Gülçehre, R. Memisevic, P. Vincent, A. Courville, Y. Bengio, and R. C. Ferrari. 2013. Combining modality specific deep neural networks for emotion recognition in video. In Proceedings of the 15th ACM International Conference on Multimodal Interaction. ACM, New York, pp. 543--550.
[58]
S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras. 2012. DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3:18--31.
[59]
J. Kory and S. K. D'Mello. 2015. Affect elicitation for affective computing. In R. Calvo, S. D'Mello, J. Gratch, and A. Kappas, editors, The Oxford Handbook of Affective Computing, pp. 371--383. Oxford University Press, New York.
[60]
J. Kory, S. K. D'Mello, and A. Olney. 2015. Motion Tracker: Camera-based Monitoring of Bodily Movements using Motion Silhouettes. PLoS ONE, 10.
[61]
G. Krell, M. Glodek, A. Panning, I. Siegert, B. Michaelis, A. Wendemuth, and F. Schwenker. 2013. Fusion of Fragmentary Classifier Decisions for Affective State Recognition. In F. Schwenker, S. Scherer, and L.-P. Morency, editors, Proceedings of the 1st International Workshop on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. Springer-Verlag, Berlin Heidelberg, pp. 116--130.
[62]
J. A. Krosnick. 1999. Survey research. Annual Review of Psychology, 50:537--567.
[63]
Y. LeCun, Y. Bengio, and G. E. Hinton. 2015. Deep learning. Nature, 521:436--444.
[64]
H. C. Lench, S. W. Bench, and S. A. Flores. 2013. Searching for evidence, not a war: Reply to Lindquist, Siegel, Quigley, and Barrett (2013). Psychological Bulletin, 139:264--268.
[65]
J. S. Lerner and D. Keltner. 2000. Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cognition & Emotion, 14:473--493.
[66]
M. D. Lewis. 2005. Bridging emotion theory and neurobiology through dynamic systems modeling. Behavioral and Brain Sciences, 28:169--245.
[67]
X. Li and Q. Ji. 2005. Active affective state detection and user assistance with dynamic Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 35:93--105.
[68]
J. Lin, C. Wu, and W. Wei. 2012. Error Weighted Semi-Coupled Hidden Markov Model for Audio-Visual Emotion Recognition. IEEE Transactions on Multimedia, 14:142--156.
[69]
K. A. Lindquist, A. B. Satpute, T. D. Wager, J. Weber, and L. F. Barrett. 2016. The brain basis of positive and negative affect: evidence from a meta-analysis of the human neuroimaging literature. Cerebral Cortex, 26:1910--1922.
[70]
K. A. Lindquist, E. H. Siegel, K. S. Quigley, and L. F. Barrett. 2013. The Hundred-Year Emotion War: Are Emotions Natural Kinds or Psychological Constructions? Comment on Lench, Flores, and Bench (2011). Psychological Bulletin, 139:255--263.
[71]
K. A. Lindquist, T. D. Wager, H. Kober, E. Bliss-Moreau, and L. F. Barrett. 2012. The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35:121--143.
[72]
F. Lingenfelser, J. Wagner, and E. André. 2011. A systematic discussion of fusion techniques for multi-modal affect recognition tasks. In Proceedings of the 13th International Conference on Multimodal Interfaces. ACM, New York, pp. 19--26.
[73]
M. Liu, R. Wang, S. Li, S. Shan, Z. Huang, and X. Chen. 2014. Combining multiple kernel methods on Riemannian manifold for emotion recognition in the wild. In Proceedings of the 16th ACM International Conference on Multimodal Interaction. ACM, New York, pp. 494--501.
[74]
G. Loewenstein and J. S. Lerner. 2003. The role of affect in decision making. In Handbook of Affective Science, pp. 619--642. Oxford University Press, New York.
[75]
K. Lu and Y. Jia. 2012. Audio-visual emotion recognition with boosted coupled HMM. In Proceedings of the 21st International Conference on Pattern Recognition. IEEE, Washington, DC, pp. 1148--1151.
[76]
J.-C. Martin, C. Clavel, M. Courgeon, M. Ammi, M.-A. Amorim, Y. Tsalamlal, and Y. Gaffary. 2018. How Do Users Perceive Multimodal Expressions of Affects? In S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, and A. Krueger, editors, The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. Morgan & Claypool Publishers, San Rafael, CA.
[77]
G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schroder. 2012. The SEMAINE database: Annotated multimodal records of emotionally coloured conversations between a person and a limited agent. IEEE Transactions on Affective Computing, 3:5--17.
[78]
M. Mehu and K. Scherer. 2012. A psycho-ethological approach to social signal processing. Cognitive Processing, 13:397--414.
[79]
B. Mesquita and M. Boiger. 2014. Emotions in context: A sociodynamic model of emotions. Emotion Review, 6:298--302.
[80]
A. Metallinou, M. Wöllmer, A. Katsamanis, F. Eyben, B. Schuller, and S. Narayanan. 2012. Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification. IEEE Transactions on Affective Computing, 3:184--198.
[81]
A. Milchevski, A. Rozza, and D. Taskovski. 2015. Multimodal affective analysis combining regularized linear regression and boosted regression trees. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 33--39.
[82]
H. Monkaresi, N. Bosch, R. A. Calvo, and S. K. D'Mello. 2017. Automated detection of engagement using video-based estimation of facial expressions and heart rate. IEEE Transactions on Affective Computing, 8:15--28.
[83]
H. Monkaresi, M. S. Hussain, and R. Calvo. 2012. Classification of affects using head movement, skin color features and physiological signals. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics. IEEE, Washington, DC, pp. 2664--2669.
[84]
H.-W. Ng, V. D. Nguyen, V. Vonikakis, and S. Winkler. 2015. Deep learning for emotion recognition on small datasets using transfer learning. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI 2015). ACM, New York, pp. 443--449.
[85]
M. Nicolaou, H. Gunes, and M. Pantic. 2011. Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence & Arousal Space. IEEE Transactions on Affective Computing, 2:92--105.
[86]
J. Ocumpaugh, R. S. Baker, and M. M. T. Rodrigo. 2012. Baker-Rodrigo Observation Method Protocol (BROMP) 1.0. Training Manual Version 1.0. Worcester Polytechnic Institute, Teachers College Columbia University, & Ateneo de Manila University, New York and Manila, Philippines.
[87]
J. Ocumpaugh, R. S. Baker, and M. M. T. Rodrigo. 2015. Baker Rodrigo Ocumpaugh Monitoring Protocol (BROMP) 2.0. Technical and Training Manual. Teachers College, Columbia University, and Ateneo Laboratory for the Learning Sciences, New York, and Manila, Philippines.
[88]
J. Park, G. Jang, and Y. Seo. 2012. Music-aided affective interaction between human and service robot. EURASIP Journal on Audio, Speech, and Music Processing, 2012:1--13.
[89]
B. Parkinson, A. H. Fischer, and A. S. Manstead. 2005. Emotion in social relations: Cultural, group, and interpersonal processes. Psychology Press, New York.
[90]
R. Picard. 1997. Affective Computing. MIT Press, Cambridge, MA.
[91]
R. Picard. 2010. Affective Computing: From Laughter to IEEE. IEEE Transactions on Affective Computing, 1:11--17.
[92]
R. W. Picard, S. Fedor, and Y. Ayzenberg. 2015. Multiple arousal theory and daily-life electrodermal activity asymmetry. Emotion Review, 8(1):62--75.
[93]
P. M. Podsakoff, S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff. 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88:879--903.
[94]
F. Ringeval, F. Eyben, E. Kroupi, A. Yuce, J.-P. Thiran, T. Ebrahimi, D. Lalanne, and B. Schuller. 2015a. Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data. Pattern Recognition Letters, 66:22--30.
[95]
F. Ringeval, B. Schuller, M. Valstar, S. Jaiswal, E. Marchi, D. Lalanne, R. Cowie, and M. Pantic. 2015b. AV+EC 2015: The First Affect Recognition Challenge Bridging Across Audio, Video, and Physiological Data. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 3--8.
[96]
F. Ringeval, A. Sonderegger, J. Sauer, and D. Lalanne. 2013. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. In Proceedings of the 2nd International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE), in conjunction with the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. IEEE, Washington, DC.
[97]
V. Rosas, R. Mihalcea, and L. Morency. 2013. Multimodal Sentiment Analysis of Spanish Online Videos. IEEE Intelligent Systems, 28:38--45.
[98]
I. J. Roseman. 2011. Emotional behaviors, emotivational goals, emotion strategies: Multiple levels of organization integrate variable and consistent responses. Emotion Review, 3:434--443.
[99]
R. Rosenthal and R. Rosnow. 1984. Essentials of behavioral research: Methods and data analysis. McGraw-Hill, New York.
[100]
V. Rozgic, S. Ananthakrishnan, S. Saleem, R. Kumar, and R. Prasad. 2012. Ensemble of SVM trees for multimodal emotion recognition. In Proceedings of the Signal & Information Processing Association Annual Summit and Conference. IEEE, Washington, DC, pp. 1--4.
[101]
W. Ruch. 1995. Will the real relationship between facial expression and affective experience please stand up: The case of exhilaration. Cognition & Emotion, 9:33--58.
[102]
J. Russell. 2003. Core affect and the psychological construction of emotion. Psychological Review, 110:145--172.
[103]
J. A. Russell, J. A. Bachorowski, and J. M. Fernandez-Dols. 2003. Facial and vocal expressions of emotion. Annual Review of Psychology, 54:329--349.
[104]
J. A. Russell, A. Weiss, and G. A. Mendelsohn. 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57:493--502.
[105]
A. Savran, H. Cao, M. Shah, A. Nenkova, and R. Verma. 2012. Combining video, audio and lexical indicators of affect in spontaneous conversation via particle filtering. In Proceedings of the 14th ACM International Conference on Multimodal Interaction. ACM, New York, pp. 485--492.
[106]
K. R. Scherer. 2009. The dynamic architecture of emotion: Evidence for the component process model. Cognition & Emotion, 23:1307--1351.
[107]
B. Schuller. 2011. Recognizing Affect from Linguistic Information in 3D Continuous Space. IEEE Transactions on Affective Computing, 2:192--205.
[108]
B. Schuller, M. Valstar, R. Cowie, and M. Pantic. 2011. AVEC 2011: Audio/Visual Emotion Challenge and Workshop. In S. D'Mello, A. Graesser, B. Schuller, and J.-C. Martin, editors, Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII 2011). Springer, Berlin.
[109]
B. Schuller, M. Valstar, F. Eyben, R. Cowie, and M. Pantic. 2012. AVEC 2012: The continuous audio/visual emotion challenge. In Proceedings of the 14th ACM International Conference on Multimodal Interaction. ACM, New York, pp. 449--456.
[110]
V. J. Shute, M. Ventura, and Y. J. Kim. 2013. Assessment and learning of qualitative physics in Newton's playground. The Journal of Educational Research, 106:423--430.
[111]
M. Soleymani, S. Asghari-Esfeden, M. Pantic, and Y. Fu. 2014. Continuous emotion detection using EEG signals and facial expressions. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). IEEE, Washington, DC, pp. 1--6.
[112]
M. Soleymani, M. Pantic, and T. Pun. 2012. Multi-Modal Emotion Recognition in Response to Videos. IEEE Transactions on Affective Computing, 3:211--223.
[113]
S. S. Tomkins. 1962. Affect Imagery Consciousness: Volume I, The Positive Affects. Tavistock, London.
[114]
J. L. Tracy. 2014. An evolutionary approach to understanding distinct emotions. Emotion Review, 6:308--312.
[115]
A. Vinciarelli and A. Esposito. 2018. Multimodal Analysis of Social Signals. In S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, and A. Krueger, editors, The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. Morgan & Claypool Publishers, San Rafael, CA.
[116]
H. Vu, Y. Yamazaki, F. Dong, and K. Hirota. 2011. Emotion recognition based on human gesture and speech information using RT middleware. In Proceedings of the IEEE International Conference on Fuzzy Systems. IEEE, Washington, DC, pp. 787--791.
[117]
J. Wagner and E. André. 2018. Real-time sensing of affect and social signals in a multimodal context. In S. Oviatt, B. Schuller, P. Cohen, D. Sonntag, G. Potamianos, and A. Krueger, editors, The Handbook of Multimodal-Multisensor Interfaces, Volume 2: Signal Processing, Architectures, and Detection of Emotion and Cognition. Morgan & Claypool Publishers, San Rafael, CA.
[118]
J. Wagner, E. André, F. Lingenfelser, J. Kim, and T. Vogt. 2011. Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data. IEEE Transactions on Affective Computing, 2:206--218.
[119]
S. Walter, S. Scherer, M. Schels, M. Glodek, D. Hrabal, M. Schmidt, R. Böck, K. Limbrecht, H. Traue, and F. Schwenker. 2011. Multimodal emotion classification in naturalistic user behavior. In J. Jacko, editor, Proceedings of the International Conference on Human-Computer Interaction. Springer, Berlin, pp. 603--611.
[120]
S. Wang, Y. Zhu, G. Wu, and Q. Ji. 2013. Hybrid video emotional tagging using users' EEG and video content. Multimedia Tools and Applications, 1--27.
[121]
J. R. Williamson, T. F. Quatieri, B. S. Helfer, G. Ciccarelli, and D. D. Mehta. 2014. Vocal and Facial Biomarkers of Depression Based on Motor Incoordination and Timing. In Proceedings of the 4th International Workshop on Audio/Visual Emotion Challenge. ACM, New York, pp. 65--72.
[122]
M. Wöllmer, M. Kaiser, F. Eyben, and B. Schuller. 2013a. LSTM modeling of continuous emotions in an audiovisual affect recognition framework. Image and Vision Computing, 31:153--163.
[123]
M. Wöllmer, F. Weninger, T. Knaup, B. Schuller, C. Sun, K. Sagae, and L. Morency. 2013b. YouTube Movie Reviews: Sentiment Analysis in an Audiovisual Context. IEEE Intelligent Systems, 28:46--53.
[124]
C. Wu and W. Liang. 2011. Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels. IEEE Transactions on Affective Computing, 2:10--21.
[125]
Z. Zeng, M. Pantic, G. Roisman, and T. Huang. 2009. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31:39--58.
[126]
D. Zhou, J. Luo, V. M. Silenzio, Y. Zhou, J. Hu, G. Currier, and H. A. Kautz. 2015. Tackling Mental Health by Integrating Unobtrusive Multimodal Sensing. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-2015). ACM, New York, pp. 1401--1409.

    Published In

    The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 2
    October 2018
    2034 pages
    ISBN:9781970001716
    DOI:10.1145/3107990

    Publisher

    Association for Computing Machinery and Morgan & Claypool

    Cited By

• (2024) A Review on Automated Facial Expression Recognition Using Image Emotion Analysis. 2024 Parul International Conference on Engineering and Technology (PICET), pp. 1--7. DOI: 10.1109/PICET60765.2024.10716066. Online publication date: 3-May-2024.
• (2024) Affect Behavior Prediction: Using Transformers and Timing Information to Make Early Predictions of Student Exercise Outcome. Artificial Intelligence in Education, pp. 194--208. DOI: 10.1007/978-3-031-64299-9_14. Online publication date: 2-Jul-2024.
• (2023) Dyadic Affect in Parent-Child Multimodal Interaction: Introducing the DAMI-P2C Dataset and its Preliminary Analysis. IEEE Transactions on Affective Computing, 14(4):3345--3361. DOI: 10.1109/TAFFC.2022.3178689. Online publication date: 1-Oct-2023.
• (2022) Recognition and Classification of Facial Expressions using Artificial Neural Networks. 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1--8. DOI: 10.1109/HORA55278.2022.9800021. Online publication date: 9-Jun-2022.
• (2022) Recognition and Classification of Facial Expressions Using Artificial Neural Networks. Proceedings of Third Doctoral Symposium on Computational Intelligence, pp. 229--246. DOI: 10.1007/978-981-19-3148-2_20. Online publication date: 10-Nov-2022.
• (2021) Emotion Recognition From Multiple Modalities: Fundamentals and methodologies. IEEE Signal Processing Magazine, 38(6):59--73. DOI: 10.1109/MSP.2021.3106895. Online publication date: Nov-2021.
• (2020) Utilizing Multimodal Data Through fsQCA to Explain Engagement in Adaptive Learning. IEEE Transactions on Learning Technologies, 13(4):689--703. DOI: 10.1109/TLT.2020.3020499. Online publication date: 1-Oct-2020.
• (2019) Time to Scale. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1--14. DOI: 10.1145/3290605.3300726. Online publication date: 2-May-2019.
• (2019) Building pipelines for educational data using AI and multimodal analytics: A "grey-box" approach. British Journal of Educational Technology, 50(6):3004--3031. DOI: 10.1111/bjet.12854. Online publication date: 21-Jul-2019.
