1 Introduction

Instead of making humans adapt to the computer world, ubiquitous computing, in essence, is about technology becoming invisible and blending into the human world [28]. The concept of Tangible User Interface (TUI) [13] extended this idea by proposing to transform digital information into concrete objects, which can be done with architectural elements (e.g. walls or doors), everyday objects (e.g. books or cards), or ambient conditions (e.g. sound, light or airflow). The concept of enactive systems [14] shares the same intent but takes a different approach, rejecting the idea of a goal-oriented, conscious interaction. Instead, in an enactive system, the person’s body and spatial presence are the conduit for a non-conscious interaction with the system. The authors drew the enactive part from the concept of enaction proposed by Bruner [2], in the sense of “learning by doing”, but it also resonates with what Varela et al. [26] called enaction. In particular, considering where the frontiers of the body lie is important when discussing the design of enactive systems, and we adopt the view of Embodied Cognition (EC) theory, which considers the cognitive system to be a network composed of the environment, the body and the brain [4].

Hence, in this paper we explore the possibilities brought by Brain-Computer Interface (BCI) for non-conscious interaction in an enactive system, analyzed through a phenomenologically grounded lens, namely that of enaction [2, 26] and of Embodied Cognition [4]. As the name implies, BCI is the interaction between a person and a computer system using signals from the brain [16]. One way of providing BCI is to capture and record the electrical activity of the brain using electrodes attached to the surface of the head, a process called Electroencephalography (EEG). Until recently, EEG systems were restricted to hospitals and laboratories, but they are now available to the general public through consumer-grade EEG devices [17]. Two examples of such technology are the Emotiv EPOC [6] and the Neurosky MindWave [19]. Both devices can provide metrics on two emotional states, attention and meditation, i.e., how focused and how relaxed a person is. We can relate these metrics to the “arousal” and “pleasure” dimensions of the circumplex model of affect [21]. The values provided by the devices come from interpretations that their proprietary algorithms make of the person’s brain waves. The availability of EEG devices, as well as the simple measures they can provide on a person’s emotional state, makes them an interesting option for using BCI in ubiquitous scenarios, or in enactive systems.

One major challenge that BCI technology needs to overcome is personalization [16]. This entails, for instance, adapting the system’s algorithms to each person’s individual brain waves, considering external factors such as possible distractions, or adapting to the person’s mood on different occasions. Personalization might also be a desirable quality for Universal Design (UD), the approach to design that aims to make interactive products suitable for the widest possible range of users without requiring adaptations [5]. In a context that potentially involves a wide variety of user characteristics and requirements – such as pervasive computing – it is crucial to provide usability and accessibility to all of them.

Such is the challenging scenario in which this work is situated. Therefore, in this paper we investigate whether and how a consumer-grade EEG device, the Neurosky MindWave, can contribute to the design of an enactive system. Moreover, we want this design to be informed by an enactive perspective, the theoretical basis from which the concept of enactive systems came. The paper is organized as follows: in Sect. 2 we present a literature review on BCI; in Sect. 3 we explain what the enactive perspective is; in Sect. 4 we present our case study with the MindWave; in Sect. 5 we discuss the results of the case study and their implications for the design of enactive systems; and in Sect. 6 we give our concluding remarks.

2 Emotion Captured Through EEG Devices

The literature has investigated gaming as a common application for research on EEG devices. For instance, [11] had four people play an audio-only horror game while wearing the Emotiv EPOC. The ambient sound of the game is meant to cause tension, as are some of the goals players need to achieve, such as moving unarmed and evading enemies. The game was designed to have an equal number of moments of calm and fear (ten of each), since the author’s goal was to test whether these states can be detected with the EEG device. After statistical analysis of the raw EEG data, the author found indications that it is possible to differentiate states of fear and calm, although more testing is needed to confirm this. In addition, the author emphasizes that the electrical activity mapped by the EEG is unique to each individual, but some patterns emerged during the analysis.

Also in the gaming context, [12] used a simulation game to test whether the Neurosky MindWave can detect the effects of surprising events on players. To do so, the authors made two versions of the game: one for the control condition and another for the experimental condition. Twenty people played the game, ten for each version. Both versions had a moment for baseline recording – where players were asked to remain calm and inactive for five minutes – and a training phase to teach the basic controls. The difference between the two versions was in the next phase, where players either experienced seven surprising events (experimental condition) or regular gameplay without surprises (control). The final stage of the game was the same for both versions, with three surprising events. Results indicated that it is possible to detect the effects of surprise using the MindWave and, furthermore, that players from the experimental group were more relaxed when they encountered the surprises in the final phase than players from the control group.

Still in the gaming context, [3] investigated whether video game events can cause changes in players’ emotions. They used the Emotiv EPOC in an experiment where twenty people played one of three commercial games, each from a distinct genre: racing, shooting and pool. For each game, the authors established which kinds of events caused either frustration or excitement, the two emotions chosen for the study. The events were manually annotated by researchers watching video footage of the participants playing. The authors used the Emotiv API, which measures emotion as a normalized value between 0 and 1, and converted this intensity into a time series so that its correlation with game events could be studied. Using linear regression, they found that (1) emotion peaks occurred about half a minute after game events, and (2) there is a strong correlation between game events and emotion peaks.

Also investigating how to apply BCI devices in games, [7] did so with an emphasis on music and sounds. More specifically, the authors explored how to detect emotions elicited by certain sounds, to see whether it would be possible to adapt a game’s music to the player’s mental state. In this investigation, they compared the Emotiv EPOC and the Neurosky MindWave. They concluded that both devices are able to detect the four emotions needed for the experiment (fear, joy, happiness and sadness), despite the MindWave having fewer sensors. Furthermore, the participants reported preferring the MindWave because it felt more comfortable. The authors also performed an experiment to see whether players can consciously produce specific musical notes using only a BCI device. At first, it was difficult for participants to reproduce notes by only listening to them. The solution the authors found was to associate each note with an image and a gesture, which reduced the training time by half.

In a similar fashion, [8] developed software that allows people to create drawings using the Neurosky MindWave. Artificial Intelligence (AI) algorithms interpret the brain signals according to classifications of brain wave rhythms, such as arousal, anxiety or relaxation. Twenty people tried the software and, according to the authors, it gave them the opportunity to express their creativity in an unconscious way. After statistical and signal analysis, the authors concluded that certain brain wave rhythms, as well as the attention levels given by the MindWave, are only relevant for the creative process of people with an arts education.

In the context of education and e-learning, [27] tested whether a person’s attention levels, as measured by the Neurosky MindWave, change while watching a video and performing a task – counting how many times an event occurs in the video. The authors also tested whether a distraction within the video can affect the attention levels. Their final goal is to help improve performance assessment and evaluation for training videos, especially with students in remote locations. Results indicated no significant difference in attention levels between participants who counted correctly and those who did not. Furthermore, there was no significant difference in attention levels between participants who saw the distraction and those who did not.

Finally, in the context of decision-making, [22] ran an experiment with ten participants in which the Emotiv EPOC monitored their EEG while they performed a task. The authors’ ultimate goal is to design a BCI system for decision-making. In the experiment, participants had to compare two sets of geometric forms, shown separately, and say whether they were identical. They did this in two stages, each consisting of 56 comparisons. After each stage, participants answered a questionnaire about their feelings during the experiment. In the analysis, the authors did not find a relationship between the participants’ self-reported perceptions and the Emotiv EPOC’s readings of five possible emotions (engagement, frustration, meditation, excitement, and long-term excitement).

In summary, from the selected works we can notice a few trends in the domain of BCI and consumer-grade EEG devices. First, the applications we saw are still at an experimentation stage, and are all for individual use in a controlled environment. Hence, the matters of a pervasive and personalized BCI have not yet been addressed. Second, most of the works performed some kind of statistical analysis on the EEG data, but there is no consensus on the statistical method, even among those that employed the same EEG device. Third, all works selected a few emotions to try to detect and classify in their experiments. This indicates that emotion is being viewed as a type of information to be processed. In this sense, we can also see that there is no consensus on the emotions that were selected; each study chose a different set.

These trends identified in the literature point to an open opportunity for investigation with regard to the design of ubiquitous systems using BCI. In this paper we take an approach that encourages a tight coupling between the system and the person using it, thus promoting pervasiveness. This approach does not treat emotion as mere information, but instead views it as part of the whole cognitive process. In other words, such an approach treats body, mind and computer system as a whole. We detail this approach in the next section.

3 Emotion Through the Lens of Enactive Approaches

An enactive system, as proposed by Kaipainen et al. [14], consists of a “dynamic mind-technology embodiment”, where the interaction is based on the involvement of the body without conscious control of the system, in contrast with conventional interaction, which is fully conscious and goal-oriented. The interface, then, can become implicit to the point of being directly linked to the person’s physiological readings. In this case, Kaipainen et al. [14] relate the concept of enactment to the idea of learning by doing, proposed by Jerome Bruner [2].

Bruner’s idea of learning through action comes from a differentiation of three experiences that happen in the learning process: the action-based (enactive), the image-based (iconic) and the language-based (symbolic). Such a separation characterizes how higher-order cognition arises from joining the action of a task with its simple components [9]. This resonates with the idea that metaphoric concepts emerge from basic bodily experiences [10]. These views of the learning process are also compatible with the definition of enaction by Varela et al. [26]: “In a nutshell, the enactive approach consists of two points: (1) perception consists in perceptually guided action and (2) cognitive structures emerge from the recurrent sensorimotor patterns that enable action to be perceptually guided”. Hence, while perception guides action, cognitive structures – or higher-order cognitive processes – are enacted from the recurrent sensorimotor patterns that allow action to be perceptually guided.

This definition of enactive approach is a reflection of what Varela et al. [26] characterize as a shift in cognitive science; one that goes from seeing the world as independent and extrinsic, to viewing the world as inseparable from the processes of self-modification. Furthermore, this shift means looking at cognitive systems not in terms of input and output, but in terms of operational closure. According to the authors, “A system that has operational closure is one in which the results of its processes are those processes themselves”. Hence, such systems are autonomous in that they are defined by internal mechanisms of self-organization, not in a way that represents a detached world, but in a manner that enacts a domain that is inseparable from the embodied cognitive system.

Autonomy, however, cannot be defined exclusively by internal processes that recursively depend on each other. According to Thompson and Stapleton [25], an autonomous system – such as human cognition – also has to regulate its interactions with the world, i.e., its network of internal processes needs to be thermodynamically open. This active regulation is what characterizes the adaptive autonomy necessary for sense-making, which, in turn, is the behavior the system adopts according to the significance and value it gives to its current environment. Furthermore, the norms the system places on the outside world are not predetermined or fixed, but enacted by the system through its autonomy. Therefore, in the same way that the two points of the enactive approach described by Varela et al. [26] are interdependent, autonomy and sense-making also feed one another.

In essence, sense-making is the reasoning behind motivated action, which is a form of self-regulation, especially if it involves affect. Hence, the enactive approach sees that sense-making is as much about cognition as it is about emotion [25]. Moreover, in the same way that the cognitive system is not seen as simply input and output, emotion is not treated as a type of information to be transmitted back and forth between a person and a computer system. It is in this sense that Boehner et al. [1] propose an interactional approach to emotion, instead of an informational one.

The interactional approach “sees emotions as culturally grounded, dynamically experienced, and to some degree constructed in action and interaction” [1], which is a vision compatible with the enactive approach. Furthermore, in terms of computer systems, the interactional approach shifts the focus “from helping computers to better understand human emotion to helping people to understand and experience their own emotions”. In turn, this implies that computer systems designed with the interactional approach do not aim to guess the correct emotions people are feeling, but instead, their goal is to encourage individual or collective awareness and reflection on the emotions that were evoked during interaction. This way, feelings are not pre-existing facts, but something that develops with conversations and interactions, where an initially vague, ambiguous or even confusing sensation may consolidate into a meaning. Again, this is in accordance with the enactive approach and with Bruner’s [2] idea of learning by doing.

In this sense, although Kaipainen et al. [14] relate their vision of an enactive system to Bruner’s theory, the minimalist example they provide seems inclined towards the informational view of emotion. The enactive system they describe consists of sensors that take psycho-physiological readings, which, in turn, are interpreted by the computer to determine the user’s emotional state from a possible set of emotions. Then, a computer-generated character changes its facial expression to match the user’s interpreted emotion. Finally, this change should cause a reaction in the user, which would reflect on the psycho-physiological readings, closing a feedback loop that can be infinite. From the enactive perspective we have presented so far, this example falls short in how it treats emotion as information, although in a way it can also bring a person to become aware of and reflect upon her own emotions. Hence, looking at this enactive system in terms of autonomy, it has operational closure because of its internal feedback loop, but its internal processes are not thermodynamically open. For that to happen, they would have to somehow regulate their interactions with the outside world. One way of doing so would be to allow the meanings of emotions to emerge from interaction, instead of encoding them into specific patterns. For instance, Boehner et al. [1] present as an example of the interactional approach a system called “Affector” [23]. It consists of two video windows placed in adjoining offices, each displaying real-time footage of the neighbor’s office. The video, however, is distorted by filters defined by the users according to what they feel is the affective mood of the office. In this example, the feedback loop between person and video represents the operational closure, while the distortion filters the users can apply to the video serve as self-regulation mechanisms, thus providing the thermodynamic openness and, consequently, sense-making.

Extending this discussion to what we found in the previous section, we can see that, since most works focus on interpreting the EEG data, the literature also concentrates on operational closure. Furthermore, since most systems we found were for individual use in controlled environments, there is little room for sense-making, especially for the co-construction of meaning for the emotions that arise during the experiments. Bearing this in mind, in the next section we present our case study, where we take these experimental conditions found in the literature as the starting point towards our goal: an enactive system that follows the enactive perspective by providing both autonomy and sense-making.

4 Case Study

The object of our case study is the use of a consumer-grade EEG device in experimental conditions and with a single user at a time, following the trend found in literature. Our goal is to design an enactive system using the enactive perspective presented in the previous section. Therefore, we aim to see how far we can go with the EEG device as a starting point.

4.1 Technical Setup

The technical setup for our case study is twofold: the EEG device and the software which participants interacted with during the experiment.

EEG Device. In this study, we adopted a consumer-grade, non-invasive EEG device called MindWave, from Neurosky [19]. It is a brainwave-sensing headset with a single dry sensor that the user places on the forehead. The MindWave communicates with a computer or smartphone through Bluetooth, and can provide the following outputs: Attention value, Meditation value, brainwave band powers (e.g. delta, theta, alpha, beta, gamma), and raw EEG samples at 512 Hz. We chose to work with the first two outputs, Attention and Meditation. They are calculated by the device’s proprietary algorithm, called eSense, which returns a value on a scale from 0 to 100. According to the Neurosky developer documentation [18], the eSense scale is interpreted according to five ranges that indicate the current level of Attention or Meditation: from 1 to 20 it means “strongly lowered”; from 20 to 40, “reduced”; from 40 to 60, “neutral” (baseline); from 60 to 80, “slightly elevated”; finally, from 80 to 100, “elevated”. A value of 0 indicates the calculation is not being performed, probably due to poor signal quality. The scale with all this information is represented in Fig. 1.

Fig. 1. Neurosky’s eSense scale for both Meditation and Attention levels, based on the developer documentation.

The developer documentation [18] also highlights that these ranges are relatively wide because the eSense algorithm uses dynamic learning, adjusting to fluctuations that occur normally in EEG readings and are particular to each person. Neurosky states that this is what allows the device to work under a variety of personal and environmental conditions while maintaining reliable and accurate results. They also encourage developers to fine-tune their use of the ranges according to the needs of the application, e.g. trigger an output only for values above 60.
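For illustration, the following minimal sketch (in Python, not Neurosky code) shows how an application might map eSense values onto the documented ranges and apply the kind of threshold Neurosky suggests; the exact handling of the boundaries at 20, 40, 60 and 80, as well as the function names, are our own assumptions, since the documentation only gives approximate ranges.

# A minimal sketch (not Neurosky code) of bucketing eSense values and
# triggering an output above a threshold. Boundary handling is assumed.

def esense_label(value: int) -> str:
    """Map an eSense Attention/Meditation value (0-100) to its documented range."""
    if value == 0:
        return "no signal"          # calculation not being performed
    if value < 20:
        return "strongly lowered"
    if value < 40:
        return "reduced"
    if value < 60:
        return "neutral"            # baseline
    if value < 80:
        return "slightly elevated"
    return "elevated"

def should_trigger(value: int, threshold: int = 60) -> bool:
    """Example of the fine-tuning Neurosky suggests: react only above a cutoff."""
    return value > threshold

if __name__ == "__main__":
    for v in (0, 15, 45, 72, 95):
        print(v, esense_label(v), should_trigger(v))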

On one hand, the eSense level of Attention indicates the magnitude of the person’s mental focus, such as that which occurs during intense concentration. Factors that can bring it down are distractions, anxiety or wandering thoughts. On the other hand, the eSense level of Meditation corresponds to mental calmness or relaxation, so simply relaxing the muscles of the body might not result in an immediate rise in the Meditation level, although relaxing the body can help relax the mind as well. In addition, closing one’s eyes might be an effective method for increasing the Meditation level, since it turns off the mental activity of processing images from the eyes. The factors that can lower the Meditation level are the same ones that lower the Attention level, plus agitation and sensory stimulation.

Software: Quiz Game. The software consists of a quiz-like game with twelve Yes/No questions, taken from the appendix of the study by Sparrow et al. [24]. We took six questions the authors classified as easy (e.g. “Are dinosaurs extinct?”) and six considered hard (e.g. “Do insects feel hunger?”). The software was developed in the Scratch [20] programming language, because it was easy to integrate with the MindWave device and allowed us to program the software rather quickly.

The ultimate goal of the quiz is to detect whether relaxing and disturbing images can affect the levels of Attention and Meditation captured by the MindWave. The idea for the interface was therefore to keep the player’s focus on the images, so other visual elements were kept to a minimum. To that end, the questions were read by a synthesized voice and no text was displayed. The player had only three buttons: “Yes”, “No”, and a button to repeat the question. Figure 2 shows an example of the interface, displaying a relaxing image – the picture of a puppy.

Fig. 2. The minimalist interface of our quiz, showing a relaxing image.

The quiz is divided into three moments, each containing four questions. In the first moment, the player can only see the three buttons on a white background. After the player answers the fourth question, s/he enters the second moment, where each question has a different disturbing image as a background. Finally, after the player answers the eighth question, s/he goes into the third moment, where each question has a relaxing background.

The four disturbing images we chose were the following: the Napalm girl from the Vietnam War, three bare-chested starving children, a Somali adolescent holding a rifle, and the explosion at the World Trade Center during the 9–11 attacks. In turn, the relaxing images were: a sleeping kitten, a puppy, reclining chairs in front of an ocean view, and a colorful sunny beach with a hammock attached to a palm tree.

Table 1. Questions from the quiz, with their corresponding answer, difficulty and set.

4.2 Design of the Experiment

For every participant, the images always appear in the same order, although the order of the questions can change. As shown in Table 1, the twelve questions are distributed among three sets – A, B and C – each containing two easy and two hard questions. The sets are used to organize the permutations that can be applied during the experiment: ABC, BCA and CAB. In other words, when the first permutation was active, the participant experienced the questions from set A with the white background, then the questions from set B with the disturbing images, and, finally, the questions from set C with the relaxing images. Within a set, the order is never altered, i.e., no matter the permutation, the questions from set A always appear in the order shown in Table 1. The software has a configuration screen where the researcher can choose among the three permutations before the participant starts answering the quiz. This was done to add a bit of randomness to the order of the questions.
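The following minimal sketch (in Python rather than Scratch, with placeholder question labels standing in for the real texts from Table 1) illustrates how the quiz assembles the question order from a chosen permutation; the easy-before-hard order within each set is an assumption made only for the example.

# A minimal sketch of the question-ordering logic. Question texts are
# hypothetical placeholders; the background sequence is fixed, and only the
# mapping of sets A, B, C onto that sequence changes with the permutation.

SETS = {
    # Each set holds two easy and two hard questions, in a fixed internal order.
    "A": ["A-easy-1", "A-easy-2", "A-hard-1", "A-hard-2"],
    "B": ["B-easy-1", "B-easy-2", "B-hard-1", "B-hard-2"],
    "C": ["C-easy-1", "C-easy-2", "C-hard-1", "C-hard-2"],
}

BACKGROUNDS = ["white", "disturbing", "relaxing"]  # same order for every participant
PERMUTATIONS = ["ABC", "BCA", "CAB"]               # chosen by the researcher beforehand

def build_session(permutation: str):
    """Return (background, question) pairs for one run of the quiz."""
    assert permutation in PERMUTATIONS
    session = []
    for background, set_name in zip(BACKGROUNDS, permutation):
        for question in SETS[set_name]:   # within-set order never changes
            session.append((background, question))
    return session

if __name__ == "__main__":
    for background, question in build_session("BCA"):
        print(background, question)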

The experiment with our quiz and the MindWave EEG device was designed to be within-group, i.e., all participants experienced the same conditions. The experiment was performed during a class of a one-semester Human Factors course, and 16 students were present on the day of the experiment. In the classroom, one by one, students went to where the experimental setup was located: the MindWave device, a pair of headphones, and a chair in front of the table with the laptop running the software. Before calling a participant, the researcher cleaned the MindWave’s forehead sensor and selected one of the question permutations in the software. Once called, the participant was helped by the researcher with placing the headphones and the MindWave, which s/he wore throughout the entire quiz.

During the semester, the students were learning how to plan and execute formal experiments in the context of Human-Computer Interaction (HCI) [15], so this experience was presented to them as an example. Hence, instead of acting only as participants, students were also asked to act as observers after participating in the experiment, paying special attention to the body language of the current participant. Along with explanations about the workings of the MindWave, this was the only instruction they received before the experiment started; details about the software were kept secret to preserve the surprise once they saw the images. In addition, the use of headphones was intended to keep the questions secret as well, since they were only presented in audio format.

Another intentional design choice was allowing the player to answer only “Yes” or “No” in the quiz. This way, players have to guess and cannot, for instance, skip a question. In addition, the software does not provide feedback on whether the selected answer was right or wrong. This decision was intended to minimize distractions.

After each participant completed the quiz, they were given a form with questions about the experiment, and also with a space for them to write their observations of other participants. The questions they had to answer were the following: (1) Did you feel an impact seeing the disturbing images? (2) Which image shocked you the most? (3) Did you feel an effect seeing the relaxing images? (4) Which image relaxed you the most?

At the end of the experiment, we also conducted a debriefing session to collect their oral impressions of the experiment. We thus gathered both quantitative and qualitative data. The quantitative data consisted of the Attention and Meditation levels per second, recorded automatically by the software. The qualitative data consisted of the answers from the forms, the ideas from the debriefing, and the written observations made by the students and by another researcher.
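The following minimal sketch (in Python, not the actual Scratch implementation) illustrates the kind of per-second logging described above; the read_attention and read_meditation functions are hypothetical stand-ins for the MindWave connection, and the CSV layout is our own assumption.

# A minimal sketch of per-second logging of Attention and Meditation values.
# The reading functions below return random numbers as placeholders.
import csv
import random
import time

def read_attention() -> int:   # placeholder for the real eSense Attention output
    return random.randint(0, 100)

def read_meditation() -> int:  # placeholder for the real eSense Meditation output
    return random.randint(0, 100)

def log_session(participant_id: str, duration_s: int = 10):
    """Append one Attention and one Meditation value per second to a CSV file."""
    with open(f"{participant_id}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["second", "attention", "meditation"])
        for second in range(duration_s):
            writer.writerow([second, read_attention(), read_meditation()])
            time.sleep(1)  # sample once per second

if __name__ == "__main__":
    log_session("P1", duration_s=5)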

Finally, the independent variables of our experiment are the difficulty of the questions (easy or hard) and the background during the quiz (white, disturbing image, or relaxing image). In turn, our dependent variables are the Attention and Meditation levels.

Furthermore, our null hypotheses are the following:

  • H0A: There is no significant difference, in terms of attention level, between seeing a white background and seeing an image.

  • H0B: There is no significant difference, in terms of attention level, between answering an easy question and answering a hard question.

  • H0C: There is no significant difference, in terms of meditation level, between seeing a white background and seeing an image.

  • H0D: There is no significant difference, in terms of meditation level, between answering an easy question and answering a hard question.

4.3 Quantitative Results

The first step was trying to reject the null hypotheses. To do so, we had to calculate the average levels of Attention and Meditation for each participant, so that we could then apply a T-Test. The calculated averages are shown in Table 2.

Table 2. Average levels of Attention (AT) and Meditation (MD) for each participant in the different types of images and question difficulties.

It is important to note that there was a problem with the MindWave readings for participant P3, so that participant’s data could not be considered in the analysis. The next step was trying to reject the null hypotheses related to the levels of Attention, H0A and H0B. The T-Test comparing the “White” and “Disturbing” columns returned p = 0.73. Between “White” and “Relaxing”, the result was p = 0.86. Finally, for the “Disturbing” and “Relaxing” columns, the test returned p = 0.35. Therefore, we cannot reject the null hypothesis H0A. Applying the T-Test to the samples from the “Easy” and “Hard” columns returned p = 0.49, so we also cannot reject null hypothesis H0B.

Lastly for our quantitative analysis, we tried to reject null hypotheses H0C and H0D, related to the levels of Meditation. Between the “White” and “Disturbing” columns, the test returned p = 0.44. Comparing the samples from the “White” and “Relaxing” columns, the result was p = 0.18. Finally, between the “Disturbing” and “Relaxing” columns, the T-Test returned p = 0.86. Therefore, we cannot reject the null hypothesis H0C. The final T-Test, comparing the samples from the “Easy” and “Hard” columns, returned p = 0.93, which means we also cannot reject the null hypothesis H0D.
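For illustration, the following minimal sketch shows the analysis procedure described above with made-up numbers in place of the real per-participant averages from Table 2; a paired T-Test is assumed for the sketch, since the same participants appear in every condition.

# A minimal sketch of the T-Test step, with hypothetical values.
from scipy import stats

# attention_avgs[condition] = per-participant average Attention level (illustrative)
attention_avgs = {
    "white":      [52.1, 47.3, 60.2, 55.0, 49.8],
    "disturbing": [50.4, 49.9, 58.7, 53.2, 51.0],
    "relaxing":   [51.6, 46.0, 59.5, 54.8, 50.2],
}

def compare(cond_a: str, cond_b: str):
    """Paired T-Test between two conditions over the same participants."""
    t, p = stats.ttest_rel(attention_avgs[cond_a], attention_avgs[cond_b])
    print(f"{cond_a} vs {cond_b}: t = {t:.2f}, p = {p:.2f}")

if __name__ == "__main__":
    compare("white", "disturbing")
    compare("white", "relaxing")
    compare("disturbing", "relaxing")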

Discussion of Quantitative Results. These quantitative results did not allow us to find statistically significant differences in the data collected from our experiment. This could be due to a number of factors, starting with the eSense algorithm. Since it is programmed to adjust automatically to fluctuations in the EEG readings, such adjustment might not be quick enough to adapt to sudden changes. During our experiment, the time participants spent on each question was relatively short: usually no more than five seconds. As reported by [3], emotion peaks can occur about half a minute after the event that triggered them. Therefore, it is possible that the EEG device was not able to detect in time the emotional reactions participants experienced, even though, according to our qualitative data, these reactions did occur.

Another explanation can be found in the works of [27] and [22]. The first found no significant difference in attention levels between people who performed a task incorrectly and those who performed it correctly. The second reported finding no relation between self-reported perceptions of emotions and the EEG device’s readings. These two works are examples of how the data from an EEG device might diverge from what people actually perceive.

4.4 Qualitative Results

First, we look at the results from the post-experiment questionnaire. For the first question, “Did you feel an impact seeing the disturbing images?”, twelve of the sixteen participants answered that they did feel an impact. Most reported that the image distracted them enough to make answering the question difficult; some even highlighted how distracting it was that the images were unrelated to the questions. Some participants also reported feelings of surprise at the sudden appearance of the images. Of the four participants who said they were not affected by the images, one gave no explanation, two claimed the images were well known, and one said that once s/he realized the images had no relation to the questions, s/he stopped paying attention to them and stayed focused on the questions.

For the second question, “Which image shocked you the most?”, twelve of the sixteen participants reported they found the image of the starving children to be the most disturbing. Two recalled the 9–11 image, one mentioned the image of the adolescent holding a rifle, one mentioned the Vietnam girl, and one participant said none of the images was shocking.

For the third question, “Did you feel an effect seeing the relaxing images?”, nine students said they did not feel an effect. Of the other seven participants, one said the relaxing images drew her eyes away from the answer buttons, where she had kept them to avoid the disturbing images. Another participant said she felt “peace and joy”. One student said she perhaps felt relief, and that the images seemed less distracting than the disturbing ones, but maybe not relaxing. Another participant reported thinking “Wow, that’s nice!”, but then turned her focus back to the questions. Lastly, one participant said that the kitten made her smile a little.

For the last question, “Which image relaxed you the most?”, nine participants reported not remembering any specific image. Interestingly, all but one of them remembered a specific disturbing image. Of the remaining seven participants, one said the puppy was the most relaxing image, three said it was the beach, and three said it was the kitten.

Regarding their observations of their colleagues’ body language, there were interesting results. Despite receiving the same instructions, each participant had his/her own way of interpreting their colleagues’ gestures. On the one hand, some reported literal body language, such as: moving fingers and feet, raising eyebrows, looking up, moving shoulders or head, intensity of blinking (quick, long or none), hand on chin, swallowing, looking away, dilated pupils, scratching, crossing legs, and tapping on the table. On the other hand, there were observations attributing direct meaning to their colleagues’ expressions: peaceful, “good expression”, doubt, tension, discontentment, upset, nervous, uncomfortable, and indifferent. There were also some middle-ground cases, such as: “I don’t know (eyes and mouth)”, “whatever (shoulders)”, “mocking laughter”, and “signaling doubt with the lips”.

Finally, in the debriefing session, participants gave good insights about the experiment. They pointed out that knowing you are being observed is a possible bias; a few even admitted they tried to restrain their body language. Another possible bias is participants answering the questions quickly just to get the quiz over with as soon as possible. Regarding the questions, some said they had difficulty paying attention to the audio: the synthesized voice does not cause emotional interference, but its pronunciation can be confusing. Regarding body language, the students highlighted how some people moved parts of their bodies when the disturbing images were shown, and how some participants tried to hide their reactions, for instance by putting a hand on their face. They also recalled that some people would look away from the screen to think. The students also felt that the relaxing images were easier to ignore, and many admitted they could not remember most of the images, or even most of the questions from the quiz. Finally, they suggested improvements such as: changing the order of the images, giving a small pause between questions, displaying the images on a larger screen to increase the impact, providing a more immersive atmosphere through lighting or sounds, displaying animated images, and making the “Yes” and “No” buttons appear with a delay, since their color is distracting.

Discussion of Qualitative Results. Georgiadis et al. [12] reported that people who encountered surprises earlier were more relaxed when they encountered later surprises than those who experienced only one surprising event. Our quantitative and qualitative data corroborate this effect, since the disturbing images – which appeared first – were very striking for most participants, while the relaxing images – which came afterwards – were usually ignored or not easily remembered. Hence, during our experiment participants could have experienced some sort of numbness that prevented the MindWave from detecting emotional reactions. Furthermore, the lack of correlation between self-reported emotions and EEG readings found by Schuh & de Borba Campos [22] is very similar to our result, since our quantitative data did not provide insights that were present in our qualitative data, e.g., the impact the disturbing images had on the participants’ concentration.

In fact, it is important to note how much richer the qualitative data was than the quantitative data. While the MindWave only measures levels of attention and meditation, the observations elicited a much wider variety of emotions, such as peacefulness, doubt, tension, discontentment, and indifference. This is coherent with the interactional approach [1], which views emotion as much more than information. The way the participants interpreted each other’s emotions, based only on body language, is a step towards emotion as a cultural, social and collaborative construction, as Boehner et al. [1] describe.

5 Discussion Towards an Enactive Scenario

Based on our analysis of the quantitative and qualitative results, we propose that, to follow the interactional approach, our quiz would have to harbor the kind of social meaning-making the students showed while observing each other. Watching another person play the quiz can lead to reflections on what that person might be feeling and what the images might be triggering for her, which, in turn, can lead to self-reflection about one’s own feelings when presented with the same experiences. Like the “Affector” example [23], our quiz could provide some sort of real-time output of how a player is feeling – such as video footage, or even the EEG reading – and allow other players to transform that output according to their own interpretations of it.

Providing such a mechanism would be a way to make our quiz thermodynamically open. For it to have autonomy, however, its internal processes would have to be recursively interdependent – which, in the current state, they are not. One way to achieve this would be to incorporate feedback loops, similar to what Kaipainen et al. [14] propose. On an individual level, we could make the quiz environment responsive to the player’s EEG readings. For instance, if the readings indicated a high level of Meditation, agitated music could play in the background, the ambient lighting could glow in warm colors, and the computer monitor could display disturbing or distracting images. If the Meditation levels went down, then calm music would play, ambient lights would glow in cold colors, and the displayed images would be comforting or relaxing. On a social level, we could make it so that it is not the current player’s EEG that affects his environment, but someone else’s. This way, players feed each other’s environments, which could lead to the co-construction of meaning if players are aware of whose emotions are affecting their environment. Again, real-time video footage of the person, or some representation of her EEG data, would suffice, as long as the interpretation of that data is left open-ended.
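As an illustration of the individual-level loop, the following minimal sketch (in Python, with hypothetical placeholder functions for the music, lighting and image actuators and for the Meditation reading) shows how the environment could counteract the player’s current state; the threshold of 60 is an assumption tied to the eSense “slightly elevated” range.

# A minimal sketch of the proposed counteracting feedback loop.
# All actuator functions and the EEG source are hypothetical placeholders.
import random
import time

def read_meditation() -> int:
    """Placeholder for a real MindWave Meditation reading (0-100)."""
    return random.randint(0, 100)

def play_music(style): print(f"music: {style}")
def set_lighting(tone): print(f"lights: {tone}")
def show_image(kind): print(f"image: {kind}")

def feedback_step(meditation: int, threshold: int = 60):
    if meditation > threshold:
        # Player is calm: the environment pushes back with agitation.
        play_music("agitated"); set_lighting("warm"); show_image("disturbing")
    else:
        # Player is agitated: the environment responds with calming stimuli.
        play_music("calm"); set_lighting("cold"); show_image("relaxing")

if __name__ == "__main__":
    for _ in range(3):
        feedback_step(read_meditation())
        time.sleep(1)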

Such flexibility is important not only to allow sense-making to occur, but also because EEG readings are unique to each individual, as noted by Garner [11]. Therefore, if we are envisioning a pervasive system that responds to non-conscious control, it is beneficial to consider individual differences. In this sense, an approach like Universal Design is essential for a technology paradigm that needs to respond to the presence of different individuals in a seamless and unobtrusive way [5]. Considering how particular EEG readings are, it would be impossible to create one solution, based exclusively on them, that accommodates every user – a fact reinforced by how our quantitative analysis found no correlations. However, enactive systems, if designed with the enactive perspective we presented, have the potential to accommodate a wide variety of users, especially with the social component that emerged from our qualitative results. For instance, in our examples in which one person’s emotional state affects another person’s experience, as long as each one can develop their own sense-making of the other’s situation, they are communicating with each other in a universal way. The ambient lights, the sounds, and the images, all embedded in the player’s environment and making use of multimodality and multimedia, cater to a wide range of human abilities, skills and preferences.

6 Conclusion

In this paper we investigated how the MindWave EEG device can contribute to the design of enactive systems, a concept of dynamic coupling between mind and technology. In our literature review, we saw that the use of EEG devices is still experimental, and meant for individual use in controlled environments. We also saw a focus on statistical analysis of the EEG data and on the classification of emotions. We took these trends as the starting point for our case study, which involved an experiment testing whether the MindWave could detect emotional reactions from participants in specific situations. Although our quantitative data did not allow us to establish correlations between the experimental events and the EEG readings, our qualitative data proved to be quite rich. In particular, once we looked at it through the lens of the enactive perspective, we found significant contributions that could elevate our experimental setup to an enactive system. The concepts of autonomy and sense-making were crucial for this process, since they provided us with a scaffold for considering how the interactions with the system could be more pervasive and less goal-oriented.

In this sense, the social component emerged as an important factor not only for co-constructing emotions, but also for tackling the problem of personalization. Pervasive or ubiquitous computing needs to reach the widest possible range of users, without the need for special adaptations. Universal Design, then, is almost a necessity, and we believe enactive systems, with the enactive approach, are a viable path towards it.