DOI: 10.1145/3242587.3242629

Orecchio: Extending Body-Language through Actuated Static and Dynamic Auricular Postures

Published: 11 October 2018

Abstract

In this paper, we propose using the auricle - the visible part of the ear - as a means of expressive output to extend body language and convey emotional states. Through an initial exploratory study, we derive a set of dynamic and static auricular postures. Using these results, we examined the relationship between emotions and auricular postures, and found that dynamic postures involving stretching the top helix at fast (e.g., 2 Hz) and slow (e.g., 1 Hz) speeds conveyed intense and mild pleasantness, respectively, while static postures involving bending the side or top helix towards the center of the ear were associated with intense and mild unpleasantness, respectively. Based on these results, we developed a prototype (called Orecchio) with miniature motors, custom-made robotic arms and other electronic components. A preliminary user evaluation showed that participants feel more comfortable using expressive auricular postures with people they are familiar with, and consider such postures a welcome addition to the vocabulary of human body language.
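The abstract reports a concrete mapping from emotional valence and intensity to auricular postures and actuation speeds. The Python sketch below restates that mapping as a lookup table a motor driver could consume. It is a minimal illustration only: the PostureCommand structure, the region names, and the posture_for helper are assumptions made for exposition, not the authors' firmware or API.

```python
from dataclasses import dataclass

@dataclass
class PostureCommand:
    region: str          # auricle region to actuate (hypothetical name, e.g., "top_helix")
    motion: str          # "stretch" (dynamic posture) or "bend" (static hold)
    frequency_hz: float  # oscillation rate for dynamic postures; 0.0 means hold

# Mapping reported in the abstract:
#   dynamic stretch of the top helix at 2 Hz -> intense pleasantness
#   dynamic stretch of the top helix at 1 Hz -> mild pleasantness
#   static bend of the side helix toward the ear's center -> intense unpleasantness
#   static bend of the top helix toward the ear's center  -> mild unpleasantness
EMOTION_TO_POSTURE = {
    ("pleasant", "intense"):   PostureCommand("top_helix", "stretch", 2.0),
    ("pleasant", "mild"):      PostureCommand("top_helix", "stretch", 1.0),
    ("unpleasant", "intense"): PostureCommand("side_helix", "bend", 0.0),
    ("unpleasant", "mild"):    PostureCommand("top_helix", "bend", 0.0),
}

def posture_for(valence: str, intensity: str) -> PostureCommand:
    """Look up the auricular posture conveying a (valence, intensity) pair."""
    return EMOTION_TO_POSTURE[(valence, intensity)]

if __name__ == "__main__":
    print(posture_for("pleasant", "intense"))
    # PostureCommand(region='top_helix', motion='stretch', frequency_hz=2.0)
```

In the actual prototype, such commands would drive the miniature motors and robotic arms described above; that control layer is not specified in the abstract.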

Supplementary Material

• suppl.mov (ufp1275.mp4) - supplemental video
• suppl.mov (ufp1275p.mp4) - supplemental video
• MP4 File (p697-huang.mp4)



Published In

UIST '18: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology
October 2018, 1016 pages
ISBN: 9781450359481
DOI: 10.1145/3242587

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. actuating human body
      2. auricle
      3. body language
      4. emotion
      5. wearable earpiece

      Qualifiers

      • Research-article

      Conference

      UIST '18

      Acceptance Rates

UIST '18 paper acceptance rate: 80 of 375 submissions (21%)
Overall acceptance rate: 561 of 2,567 submissions (22%)



      Cited By

• (2024) SoundHapticVR: Head-Based Spatial Haptic Feedback for Accessible Sounds in Virtual Reality for Deaf and Hard of Hearing Users. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 10.1145/3663548.3675639, 1-17. Online publication date: 27-Oct-2024.
• (2024) ExBreath: Explore the Expressive Breath System as Nonverbal Signs towards Semi-unintentional Expression. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 10.1145/3613905.3650870, 1-7. Online publication date: 11-May-2024.
• (2022) Sensing with Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 10.1145/3550314, 6(3), 1-57. Online publication date: 7-Sep-2022.
• (2022) Morphace: An Integrated Approach for Designing Customizable and Transformative Facial Prosthetic Makeup. Proceedings of the Augmented Humans International Conference 2022, 10.1145/3519391.3519406, 58-67. Online publication date: 13-Mar-2022.
• (2022) Synergistic integration between internet of things and augmented reality technologies for deaf persons in e-learning platform. The Journal of Supercomputing, 10.1007/s11227-022-04952-z, 79(10), 10747-10773. Online publication date: 23-Nov-2022.
• (2021) ThermEarhook: Investigating Spatial Thermal Haptic Feedback on the Auricular Skin Area. Proceedings of the 2021 International Conference on Multimodal Interaction, 10.1145/3462244.3479922, 662-672. Online publication date: 18-Oct-2021.
• (2021) PalmBeat: A Kinesthetic Way to Feel Groove With Music. 12th Augmented Human International Conference, 10.1145/3460881.3460932, 1-8. Online publication date: 27-May-2021.
• (2021) Multi-modal Spatial Object Localization in Virtual Reality for Deaf and Hard-of-Hearing People. 2021 IEEE Virtual Reality and 3D User Interfaces (VR), 10.1109/VR50410.2021.00084, 588-596. Online publication date: Mar-2021.
• (2021) Head Up Visualization of Spatial Sound Sources in Virtual Reality for Deaf and Hard-of-Hearing People. 2021 IEEE Virtual Reality and 3D User Interfaces (VR), 10.1109/VR50410.2021.00083, 582-587. Online publication date: Mar-2021.
• (2020) KissGlass. Proceedings of the Augmented Humans International Conference, 10.1145/3384657.3384801, 1-5. Online publication date: 16-Mar-2020.
