DOI: 10.1145/1322192.1322203

Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder

Published: 12 November 2007

Abstract

The emergence of novel affective technologies such as wearable interventions for individuals who have difficulties with social-emotional communication requires reliable, real-time processing of spontaneous expressions. This paper describes a novel wearable camera and a systematic methodology to elicit, capture and tag natural, yet experimentally controlled face videos in dyadic conversations. The MIT-Groden-Autism corpus is the first corpus of naturally-evoked facial expressions of individuals with and without Autism Spectrum Disorders (ASD), a growing population who have difficulties with social-emotional communication. It is also the largest in both the number and duration of its videos, and it represents affective-cognitive states that extend beyond the basic emotions. We highlight the machine vision challenges inherent in processing such a corpus, including pose changes and pathological affective displays.
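The paper itself contains no code, but the machine-vision challenge it names (pose change in natural dyadic conversation) can be made concrete with a short sketch. The Python snippet below is purely illustrative and is not the authors' pipeline: it scans a corpus video with OpenCV's stock frontal-face detector and reports the spans in which no frontal face is detectable, a rough proxy for the pose changes and occlusions that complicate automated analysis of naturally evoked footage. The video file name and the detector parameters are assumptions.

# Hypothetical sketch: flag likely pose-change/occlusion segments in a corpus
# video by checking, frame by frame, whether a frontal face is detectable.
# This is NOT the authors' method; it only illustrates the challenge the
# paper raises. Requires: pip install opencv-python

import cv2

VIDEO_PATH = "dyadic_session_01.avi"   # assumed file name
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing

frame_idx = 0
missing_since = None   # start frame of the current "no frontal face" run
segments = []          # (start_sec, end_sec) spans with no detectable frontal face

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0 and missing_since is None:
        missing_since = frame_idx
    elif len(faces) > 0 and missing_since is not None:
        segments.append((missing_since / fps, frame_idx / fps))
        missing_since = None
    frame_idx += 1

if missing_since is not None:           # run extends to the end of the video
    segments.append((missing_since / fps, frame_idx / fps))
cap.release()

for start, end in segments:
    print(f"no frontal face from {start:.1f}s to {end:.1f}s")

A detector this simple conflates head pose, hand-over-face occlusion and outright detector failure, but that is exactly the point the paper makes: naturally evoked footage breaks tools built for posed, frontal datasets.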


Cited By

  • (2023) Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection in Children with Autism. Universal Access in Human-Computer Interaction, pp. 657-677. DOI: 10.1007/978-3-031-35681-0_43. Online publication date: 9-Jul-2023.
  • (2019) Natural-Spontaneous Affective-Cognitive Dataset for Adult Students With and Without Asperger Syndrome. IEEE Access, 7:77990-77999. DOI: 10.1109/ACCESS.2019.2921914.
  • (2019) A Survey on Image Acquisition Protocols for Non-posed Facial Expression Recognition Systems. Multimedia Tools and Applications. DOI: 10.1007/s11042-019-7596-2. Online publication date: 2-May-2019.
  • (2013) Analyses of a Multimodal Spontaneous Facial Expression Database. IEEE Transactions on Affective Computing, 4(1):34-46. DOI: 10.1109/T-AFFC.2012.32.
  • (2011) Natural Affect Data: Collection and Annotation. New Perspectives on Affect and Learning Technologies, pp. 55-70. DOI: 10.1007/978-1-4419-9625-1_5.
  • (2009) A3. ACM Transactions on Accessible Computing (TACCESS), 2(2):1-29. DOI: 10.1145/1530064.1530066.

    Published In

    ICMI '07: Proceedings of the 9th international conference on Multimodal interfaces
    November 2007
    402 pages
    ISBN: 9781595938176
    DOI: 10.1145/1322192
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. affective computing
    2. autism spectrum disorder
    3. facial expressions
    4. spontaneous video corpus

    Qualifiers

    • Research-article

    Conference

    ICMI '07: International Conference on Multimodal Interfaces
    November 12-15, 2007
    Nagoya, Aichi, Japan

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%
