Abstract
In this paper, we describe a smartphone application that aims at motivating users to use facial expressions. This has a twofold goal: to reintroduce facial expressions as a nonverbal means in the computer-mediated communication of emotions and to provide the opportunity for self-reflection on personal emotional states while fostering smiles in order to improve mental wellbeing. This paper provides a description of the developed prototype and reports the results of a first observation study conducted during an interactive event.
1 Introduction and Background
The technological revolution has thoroughly changed the way people communicate. There has been a shift from the physical to the virtual world and, nowadays, much human interaction is computer mediated. A consequence of this shift towards an increasingly important role of human-to-computer-to-human interaction (HCHI) is that the nonverbal part of normal face-to-face communication, such as paralanguage and kinesics, is missing entirely. Kinesics is the use of body motion for communication, such as facial expressions and gestures. Facial expressions are particularly important not only to convey richer information in human communication but also because they play a crucial role in people’s mental wellbeing. Indeed, the principle that performing facial expressions directly influences our emotional state is well known in psychology [11]. The James-Lange theory of emotion states that life experiences produce a direct physiological response via the human nervous system, and that emotions occur as a consequence of these changes rather than being their cause. In particular, Kleinke et al. conducted an experiment whose results confirmed this theory [13]: test subjects reported an increased positive mood while performing positive facial expressions and a less positive experience while performing negative facial expressions. This was true even when the participants were only mimicking these facial expressions. Moreover, the effects were enhanced when participants could see their reflection in a mirror. These findings become particularly relevant when associated with the benefits that smiling provides. Smiling activates the release of neuropeptides that can help in fighting stress [18]. Moreover, smiling is associated with the release of dopamine, endorphins and serotonin, which function as natural pain relievers [15] and antidepressants [12].
In this paper, we present a smartphone application that enables users to communicate in Social Awareness Streams (SAS) using facial expressions to share their emotional state. The main contribution of this work is twofold: (1) the introduction of kinesics in affective HCHI; (2) the empowerment of users to self-monitor their emotional states in order to support the expression of positive emotions. Through these two axes, the application augments human expressivity for the communication of emotional states in computer-mediated communication (CMC). This could also augment users’ social skills in offline interaction and, ultimately, their happiness. The interface has been implemented in a mobile environment because a large part of the population currently owns and uses a smartphone, which also provides ubiquitous interaction and continuous connection. Indeed, the smartphone can be considered a pervasive technology that has deeply penetrated current society and that can therefore effectively influence users’ daily lives.
The next section presents the related work; in the third section, the prototype interface is described in detail. The fourth section is dedicated to the observation study we conducted, while the last section presents the conclusion and future work.
2 Related Work
Positive technology aims at creating technologies that contribute to enhancing happiness and psychological wellbeing. In [16], Riva et al. proposed a framework to classify positive technologies according to their effect on three main features of personal experience:
- Hedonic: technologies used to induce positive and pleasant experiences;
- Eudaimonic: technologies used to support individuals in reaching engaging and self-actualizing experiences;
- Social/Interpersonal: technologies used to support and improve social integration and/or connectedness between individuals, groups, and organizations.
The smartphone application introduced in this paper integrates the Hedonic and Social/Interpersonal features. For this reason, we present the related work concerning these features that uses facial expressions as the main means of emotion communication.
2.1 Social/Interpersonal Feature
Facial expressions provide an important channel for emotional and social display, and several works have tried to introduce this nonverbal interaction into computer-mediated communication. For example, FAIM captures users’ spontaneous facial expressions via video and displays them as dynamic avatars in instant messaging software [7]. Similarly, Kuber and Wright developed an instant messaging application that recognized facial expressions through the Emotiv EPOC headset and displayed the users’ emotional states with smileys [14]. Their study showed that this approach improved affective interaction compared with normal text-based communication. Caon et al. used the same technology for facial expression recognition but applied the concept to communication over SASs, with the feedback displayed in the environment in a context-aware multimodal manner [4].
2.2 Hedonic Feature
The notion that inducing a person to smile can make her happier has encouraged some researchers to propose novel interfaces that aim at promoting smiles. For example, Tsujita and Rekimoto presented a variety of digital appliances, such as refrigerators, alarm clocks and mirrors, that require the user to smile in order to function properly [20]. In [19], they described two field tests that they conducted with the refrigerator. This refrigerator integrated a camera to recognize and count the user’s smiles, and only when the user smiled did the system facilitate opening the refrigerator door. These field tests showed that the system motivated users to smile more frequently, provided a tool for self-reflection and fostered communication in a couple. Hori et al. proposed a system composed of a smartphone and a wearable device, called the communication pedometer, which applied a gamification approach to promote smiling during interpersonal communication [10]. The “Moodmeter” is a system that aimed at encouraging smiles in public spaces [9]. It was composed of a camera that recognized bystanders’ smiles and displayed them on a public display. This setup was replicated in many spots of a campus, and a heat-map could be displayed to show which spot was the “happiest”. Another interesting work in this field is the mirror that manipulates the user’s facial expression [22]; its authors demonstrated that it is possible to manipulate emotional states through real-time feedback of deformed facial expressions.
The smartphone application presented in this paper proposes an interface that enables users to use facial expressions to share emotional information. At the same time, like the work by Tsujita and Rekimoto [20], it aims at fostering smiles.
3 Prototype
The smartphone application presented in this paper has been designed to support the first and third features of the positive technology framework [16]. Indeed, it allows the user to use facial expressions to communicate her emotional state in SASs. Since this application aims at improving computer-mediated interaction, it could also enhance the feeling of connectedness among the members of a social group. At the same time, the application provides feedback about the performed facial expressions and reminds the user to smile in order to induce positive emotions and increase happiness.
3.1 Interface for Affective Interaction
This part of the interface aims at enhancing the computer-mediated communication of emotions and can be classified under the Social/Interpersonal feature of the positive technology framework. SASs such as Twitter do not allow a variety of modalities and also limit the number of characters per message; therefore, the expressivity of emotions in SASs is quite limited. We developed this app to provide ubiquitous sharing of emotions in SASs (i.e., Twitter) through text, images and emotions (Fig. 1(b)). The app induces the user to communicate her emotional state by performing the associated facial expression in front of the front camera embedded in the smartphone. In particular, when a user wants to share a message on Twitter, she can open the app, where she will find the interface to enter the text and browse pictures; before sending the message, the app asks her whether she wants to share an emotion with it. If she agrees, the app shows the view from the front camera so that she can focus on her face and on the emotion detected by the system, as depicted in Fig. 1(a). The algorithm for facial expression recognition distinguishes four facial expressions associated with four emotions: smiling for happiness, frowning for sadness, scowling for anger, and winking for trust. The facial expression recognition has been implemented with the OpenCV library [1]. We used a supervised learning approach, in which the user trains the system before the first use by performing each expression three times. In the recognition phase, a similarity algorithm compares the ORB feature descriptors [17] of the mouth and eye regions. These regions are extracted using the Haar feature-based cascade classifiers proposed by Viola and Jones [21]. The app also provides an overview of the sent Tweets with the text, image and a smiley representing the shared emotional state (Fig. 1(b)).
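The recognition step described above can be sketched as nearest-template matching over binary ORB descriptors. The sketch below is a minimal illustration of that idea, not the authors’ exact implementation: the Haar-cascade region extraction and the ORB computation (both available in OpenCV) are omitted, and the function names `match_score` and `classify_expression`, as well as the per-emotion template dictionary from the training phase, are hypothetical.

```python
import numpy as np

def hamming(d1, d2):
    # Pairwise Hamming distances between two sets of binary descriptors.
    # Rows are 32-byte descriptors, as OpenCV's ORB would return them.
    x = np.bitwise_xor(d1[:, None, :], d2[None, :, :])
    return np.unpackbits(x, axis=-1).sum(axis=-1)

def match_score(query, template):
    # For each query descriptor, take the distance to its nearest template
    # descriptor; the mean is a dissimilarity score (lower = more similar).
    return hamming(query, template).min(axis=1).mean()

def classify_expression(query, templates):
    # templates: dict mapping an emotion label to the descriptor array
    # recorded during the supervised training phase.
    scores = {label: match_score(query, t) for label, t in templates.items()}
    return min(scores, key=scores.get)
```

In use, descriptors extracted from the live camera frame would be classified against the three training samples stored per expression; here, synthetic descriptor arrays stand in for real ORB output.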
The choice of letting the user communicate her current emotional state through facial expressions aims at reintroducing the kinesics typical of nonverbal communication. In fact, facial expressions and gestures are very meaningful means of expression in face-to-face communication, and they add information to the spoken sentences. In particular, affect not only allows a richer interaction but also helps to disambiguate meaning, facilitating more effective communication [2].
3.2 Interface for Mental Wellbeing
This part of the interface aims at inducing positive emotions to increase the user’s happiness and can be classified under the Hedonic feature of the positive technology framework. The application provides some statistics as feedback for user empowerment; indeed, they help users explore and understand information about themselves, supporting self-reflection and the identification of opportunities for behavior change [6]. These statistics take into account the number of facial expressions performed each day. It is then possible to show the number of smiles per day as a graph (Fig. 1(c)) and the percentages of all the performed facial expressions in a pie chart (Fig. 1(d)). Moreover, the application integrates a persuasive interface that prompts the user to smile when the smile counter detects a low level of happiness. In this case, the application triggers an alarm shown in the notification bar (Fig. 2(a)); when the user acknowledges it, an interface showing funny or positive images coming from an RSS feed chosen by the user is displayed (Fig. 2(b)). In this screen, a progress bar shows how much the user should smile in order to accomplish the task. The user can also access the RSS feed whenever she wants; when she smiles, the images are saved on the smartphone so that she can watch them again (Fig. 2(c)). This persuasive interface aims at inducing the user to smile in order to make her happier, as do the works presented in [10] and [20]. In fact, as explained in Sect. 1, a smile increases positive mood whether it is unconscious or voluntary, and the possibility of seeing one’s own face smiling, as in a mirror, can even enhance this effect. Indeed, in line with the hedonic feature of positive technology, this interface aims at enhancing happiness. Moreover, this tool aims at supporting self-reflection to train the self-regulation of emotions, which also contributes to mental health [8].
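The smile counter and the trigger for the persuasive interface can be sketched as a per-day tally checked against a target. This is an illustrative sketch under stated assumptions, not the app’s actual code: the class name `SmileMonitor`, its methods and the daily target value are all hypothetical, and the real application would run the check from the Android notification system.

```python
from collections import Counter
from datetime import datetime

class SmileMonitor:
    """Counts detected smiles per day and decides when to prompt the user."""

    def __init__(self, daily_target=10):
        # daily_target is an assumed threshold for "enough" smiles in a day
        self.daily_target = daily_target
        self.counts = Counter()  # maps a date to the number of smiles seen

    def record_smile(self, when):
        # called each time the recognizer classifies a frame as a smile
        self.counts[when.date()] += 1

    def smiles_on(self, day):
        # per-day count backing the graph of smiles (cf. Fig. 1(c))
        return self.counts[day]

    def should_remind(self, now):
        # trigger the persuasive notification while the day's count is low
        return self.counts[now.date()] < self.daily_target
```

Once the reminder fires, the app would display the RSS-feed images and fill the progress bar as further smiles are recorded.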
4 Observation Study
We performed the observation study during a live demonstration held during the conference on Affective Computing and Intelligent Interaction (ACII 2013) at the International Conference Centre of Geneva in Switzerland on 4 September 2013. We set up a prototype composed of multiple input and output interfaces for the communication of emotional states [5]. In particular, the setup comprised two main input interfaces: a touch-enabled Smart TV using the Microsoft Kinect for the facial expression recognition and the Android smartphone running the application we developed. We conducted an observation study both in the field and on the recorded video; other findings were presented in [3]. Watching the video, we counted the number of conference attendees involved in the interactive demonstration: 85 people stopped to watch the demo and 29 of them actually interacted with the prototype. The people attending the conference were of different ethnicities, genders and ages; although all of them were interested in affective computing, there were researchers and practitioners with very heterogeneous backgrounds, providing a varied pool of subjects for our first public test. The testers generally demonstrated great interest in the proposed prototype. In particular, the interaction through facial expressions was appreciated, even though some considered it too artificial. In fact, this kind of interaction implies that the user has to express her emotional state through voluntary facial expressions, which in this case can be seen as gestures rather than unconscious physiological reactions; for this reason, some participants suggested integrating automatic facial expression recognition that detects unconscious expressions and then proposes to the user to share the detected emotional state on the SAS. The interaction with the persuasive interface to motivate the user to smile provided the same result: the users felt that they were “faking” a smile in order to save the images they liked.
However, as explained in Sect. 1, mimicking positive facial expression is an effective means to increase the user’s positive mood.
The most important research question we were trying to answer in this interactive event was whether performing facial expressions could be a problem in terms of user acceptability in a public context. In fact, using facial expressions to communicate with the mobile phone could be seen as awkward by the user, particularly in a social context like this event, where users were surrounded by a crowd that could include friends, colleagues and strangers. The result of this first acceptability test is quite encouraging; indeed, nobody refused to perform facial expressions in front of the mobile phone for the communication of emotional states. This makes us think that in the current society, where “selfies” are a main trend invading the social networks, the use of a camera as a means of communication is considered acceptable even in public spaces. We consider these preliminary results encouraging, although we are aware that further tests are needed.
5 Conclusion and Future Work
In this paper, we presented a smartphone application integrating two features of positive technology. The Social/Interpersonal interface aims at introducing a part of kinesics, i.e., facial expressions, into the computer-mediated communication of emotions. The Hedonic interface takes advantage of the recorded emotions shared by the user to provide statistics that enable self-reflection about the shared emotions, which aims at improving mental health; if the user did not smile enough during the day, a persuasive interface inspired by Tsujita and Rekimoto’s work [19] motivates the user to smile.
As future work, we are integrating a gamification mechanism, as in Hori et al.’s work [10], to sustain users’ engagement and to create an active community of people willing to improve their affective interaction and mental wellbeing.
References
Bradski, G.: The OpenCV library. Dr. Dobb’s J. 25(11), 120–126 (2000)
Brave, S., Nass, C.: Emotion in human–computer interaction. In: The Human–Computer Interaction Handbook, pp. 81–93 (2003)
Caon, M., Angelini, L., Khaled, O.A., Lalanne, D., Yue, Y., Mugellini, E.: Affective interaction in smart environments. Procedia Comput. Sci. 32, 1016–1021 (2014)
Caon, M., Angelini, L., Yue, Y., Khaled, O.A., Mugellini, E.: Context-aware multimodal sharing of emotions. In: Kurosu, M. (ed.) HCII/HCI 2013, Part V. LNCS, vol. 8008, pp. 19–28. Springer, Heidelberg (2013)
Caon, M., Khaled, O.A., Mugellini, E., Lalanne, D., Angelini, L.: Ubiquitous interaction for computer mediated communication of emotions. In: Proceedings of ACII 2013, pp. 717–718 (2013)
Dey, A.: Persuasive technology or explorative technology? In: Berkovsky, S., Freyne, J. (eds.) PERSUASIVE 2013. LNCS, vol. 7822, p. 1. Springer, Heidelberg (2013)
El Kaliouby, R., Robinson, P.: FAIM: integrating automated facial affect analysis in instant messaging. In: Proceedings of IUI 2004, pp. 244–246. ACM (2004)
Gross, J.J., Muñoz, R.F.: Emotion regulation and mental health. Clin. Psychol. Sci. Pract. 2(2), 151–164 (1995)
Hernandez, J., Hoque, M.E., Drevo, W., Picard, R.W.: Mood meter: counting smiles in the wild. In: Proceedings of UbiComp 2012, pp. 301–310. ACM (2012)
Hori, Y., Tokuda, Y., Miura, T., Hiyama, A., Hirose, M.: Communication pedometer: a discussion of gamified communication focused on frequency of smiles. In: Proceedings of AH 2013, pp. 206–212. ACM (2013)
James, W.: The Principles of Psychology, vol. 2. Dover Publications, New York (1950)
Karren, K.J., Smith, L., Gordon, K.J.: Mind/Body Health: The Effects of Attitudes, Emotions, and Relationships. Pearson Higher Ed (2013)
Kleinke, C.L., Peterson, T.R., Rutledge, T.R.: Effects of self-generated facial expressions on mood. J. Pers. Soc. Psychol. 74, 272–279 (1998)
Kuber, R., Wright, F.P.: Augmenting the instant messaging experience through the use of brain-computer interface and gestural technologies. Int. J. Hum. Comput. Interact. 29(3), 178–191 (2013)
Lane, R.D.: Neural correlates of conscious emotional experience. In: Cognitive neuroscience of emotion, pp. 345–370 (2000)
Riva, G., Banos, R.M., Botella, C., Wiederhold, B.K., Gaggioli, A.: Positive technology: using interactive technologies to promote positive functioning. Cyberpsychol. Behav. Soc. Netw. 15(2), 69–77 (2012)
Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: Proceedings of ICCV 2011, pp. 2564–2571. IEEE (2011)
Seaward, B.: Managing Stress: Principles and Strategies for Health and Well-Being. Jones & Bartlett Publishers, Boston (2008)
Tsujita, H., Rekimoto, J.: Smiling makes us happier: enhancing positive mood and communication with smile-encouraging digital appliances. In: Proceedings of UbiComp 2011, pp. 1–10. ACM (2011)
Tsujita, H., Rekimoto, J.: Smile-encouraging digital appliances. IEEE Pervasive Comput. 12(4), 5–7 (2013)
Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of CVPR 2001, vol. 1, pp. I-511–I-518. IEEE (2001)
Yoshida, S., Tanikawa, T., Sakurai, S., Hirose, M., Narumi, T.: Manipulation of an emotional experience by real-time deformed facial feedback. In: Proceedings of AH 2013, pp. 35–42. ACM (2013)
Acknowledgments
We want to thank Alexandre Nussbaumer and Hoang-Linh Nguyen for their fundamental contribution to this work, which has been supported by the Hasler Foundation in the framework of the “Living in Smart Environments” project.
© 2015 Springer International Publishing Switzerland
Caon, M., Angelini, L., Carrino, S., Khaled, O.A., Mugellini, E. (2015). A Smartphone Application to Promote Affective Interaction and Mental Health. In: Kurosu, M. (ed.) Human-Computer Interaction: Design and Evaluation. HCI 2015. LNCS, vol. 9169. Springer, Cham. https://doi.org/10.1007/978-3-319-20901-2_43