DOI: 10.1145/3389189.3389190 (PETRA conference proceedings, research article)

Emotion expression in a socially assistive robot for persons with Parkinson's disease

Published: 30 June 2020

ABSTRACT

Emotions are crucial for human social interaction, and people communicate them through a variety of modalities: kinesthetic (facial expressions, body posture, and gestures), auditory (the acoustic features of speech), and semantic (the content of what they say). Sometimes, however, communication channels for certain modalities are unavailable (e.g., when texting), and sometimes they are compromised by a disorder such as Parkinson's disease (PD), which may affect facial, gestural, and speech expressions of emotion. To address this, we developed a prototype for an emoting robot that can detect emotions in one modality, specifically in the content of speech, and then express them in another modality, specifically through gestures.

The system consists of two components: detection and expression of emotions. In this paper we present the development of the expression component of the emoting system. We focus on its dynamical properties, which use a spring model for smooth transitions between emotion expressions over time. This novel method compensates for varying utterance frequency and for prediction errors coming from the emotion recognition component. We also describe the input the dynamical expression component receives from the emotion detection component, the development and validation of the output (the gestures instantiated in the robot), and the implementation of the system. We present results from a human validation study showing that people perceive the robot gestures generated by the system as expressing the emotions in the speech content. We also show that people's perceived accuracy of emotion expression is significantly higher for a mass-spring dynamical system than for a system without a mass-spring when specific detection errors are present. We discuss and suggest future developments of the system and further validation experiments.
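The spring model described above can be pictured as a damped mass-spring filter: each new per-utterance emotion prediction becomes the spring's rest position, and the robot's expressed emotion value eases toward it rather than jumping, which smooths over irregular utterance timing and occasional misclassifications. The sketch below is illustrative only; the constants, function names, and integration scheme are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a critically damped mass-spring
# filter that eases an expressed emotion value toward noisy target predictions.

def spring_step(x, v, target, k=20.0, m=1.0, dt=0.02):
    """Advance one semi-implicit Euler step of a damped spring pulling x toward target."""
    c = 2.0 * (k * m) ** 0.5          # critical damping: fast approach, no overshoot
    a = (-k * (x - target) - c * v) / m
    v += a * dt                        # update velocity first (semi-implicit Euler)
    x += v * dt
    return x, v

def smooth(targets, steps_per_target=100):
    """Run a sequence of emotion predictions through the spring; return the trajectory."""
    x, v = 0.0, 0.0
    trace = []
    for t in targets:
        for _ in range(steps_per_target):
            x, v = spring_step(x, v, t)
            trace.append(x)
    return trace
```

Because the filter's state carries over between utterances, an erroneous prediction only deflects the trajectory briefly before subsequent correct targets pull it back, which is one way the smoothing could mask detection errors.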

This paper is part of a larger project to develop a prototype for a socially assistive robot for persons with PD. The goal is to present the technical implementation of one robot capability: emotion expression.


Supplemental Material

a7-valenti.mp4 (video, 24.3 MB)


Published in

PETRA '20: Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments
June 2020, 574 pages
ISBN: 9781450377737
DOI: 10.1145/3389189
Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
