
Designing a Multimodal Emotional Interface in the Context of Negotiation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12200)

Abstract

This paper examines whether a virtual assistant with emotional intelligence improves Human-Machine Interaction (HMI) in the specific use case of price negotiations. We propose a schema for an Emotional Interface, derived from the Skills-Rules-Knowledge (SRK) model and the Four Branch Model of Emotional Intelligence. Based on this schema, we construct a prototype of a virtual assistant with emotional intelligence. An avatar conveys the respective emotions through prosody and facial expression. In a within-subject study, the prototype is compared to a conventional digital assistant with respect to user experience and trust building. The findings show that the emotions animated in the avatar's facial expressions cannot be clearly identified and attributed. Nevertheless, the prototype outperforms the conventional digital assistant in user experience, mainly due to the hedonic quality dimension. The study found no difference in trust between the emotional and the conventional digital assistant. These results open up several interesting directions for future research, which are outlined in this paper.
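The abstract does not detail how the avatar's emotional prosody is generated, but a common way to realize emotion-dependent prosody in a text-to-speech pipeline is SSML prosody markup (W3C SSML 1.1). The following minimal Python sketch illustrates the general idea only; the emotion set and the rate/pitch values are illustrative assumptions, not the mapping used in the paper.

# Illustrative sketch: emotion-dependent prosody via SSML markup for a
# text-to-speech avatar. All parameter values below are hypothetical;
# the paper's actual emotion-to-prosody mapping is not given here.

EMOTION_PROSODY = {
    "joy":     {"rate": "110%", "pitch": "+2st"},
    "anger":   {"rate": "115%", "pitch": "-1st"},
    "sadness": {"rate": "85%",  "pitch": "-2st"},
    "neutral": {"rate": "100%", "pitch": "+0st"},
}

def to_ssml(text: str, emotion: str) -> str:
    """Wrap an utterance in an SSML <prosody> element for the given emotion."""
    p = EMOTION_PROSODY.get(emotion, EMOTION_PROSODY["neutral"])
    return (
        f'<speak><prosody rate="{p["rate"]}" pitch="{p["pitch"]}">'
        f'{text}</prosody></speak>'
    )

if __name__ == "__main__":
    # Example: an upbeat counter-offer during a price negotiation.
    print(to_ssml("I can offer you a ten percent discount.", "joy"))

The resulting SSML string could be passed to any SSML-capable speech synthesizer; the facial-expression channel would be driven separately by the avatar's animation system.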

Supported by BaCaTec.



Author information

Correspondence to Fabian Pelzl.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Pelzl, F., Diepold, K., Auernhammer, J. (2020). Designing a Multimodal Emotional Interface in the Context of Negotiation. In: Marcus, A., Rosenzweig, E. (eds) Design, User Experience, and Usability. Interaction Design. HCII 2020. Lecture Notes in Computer Science, vol 12200. Springer, Cham. https://doi.org/10.1007/978-3-030-49713-2_35


  • DOI: https://doi.org/10.1007/978-3-030-49713-2_35


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49712-5

  • Online ISBN: 978-3-030-49713-2

