Published by Oldenbourg Wissenschaftsverlag, November 27, 2021

Interactions with Artificial Entities Reloaded: 20 Years of Research from a Social Psychological Perspective

Nicole Krämer and Gary Bente

From the journal i-com

Abstract

Twenty years ago, we reflected on the potential of psychological research in the area of embodied conversational agents and systematized the variables that need to be considered in empirical studies. We outlined potential and necessary research by considering the independent variables behavior and appearance of the embodied agent, the dependent variables acceptance, efficiency, and effects on user behavior, and moderating variables such as task and individual differences. Twenty years later, we now give an account of what has been found and how the field has developed – suggesting avenues for future research.

1 20 Years Ago: Claims and Pleas

At the turn of the millennium, embodied interface agents were hailed as a new form of technology interface that would facilitate future human-computer interaction [5], [33]. High expectations were raised concerning the animated, human-like figures which were developed to enable speech-based, natural dialogue with the human user. The promise included not only greater naturalness of the interaction but also higher efficiency and acceptance, intuitive interaction, and the overcoming of anxiety towards technology. In this vein, Takeuchi and Naito [59, p. 454] stated optimistically that the new interface agents would revolutionize the field of human-technology interaction: “We surmise that once people are accustomed to synthesized faces, performance becomes more efficient, and a long partnership further improves performance. Human-like characterization is one good form of autonomous agents, because people are accustomed to interact with other humans”. Now, a quarter of a century later, the dialogues these interface agents are capable of are still far from perfect, and instead of being broadly deployed in interactive applications, they are less visible in daily life than robots and speech assistants. Still, these new interaction forms paved the way for a research agenda that is still being expanded in the research realms of robots and speech assistants: Early research on talking computers and embodied interface agents demonstrated that effects can be inherently social – even without or with imperfect embodiment [21], [45]. From early on, this made it necessary not only to address usability aspects but to include a genuine psychological perspective in order to better understand the social reactions observed in numerous studies. Parise et al. [43] were among the first research groups to recognize this and aptly described how the research needed to be broadened: “As computer interfaces can display more life-like qualities such as speech output and personable characters or agents, it becomes important to understand and assess user’s interaction behavior within a social interaction framework rather than only a narrower machine interaction one” (p. 123).

And indeed, for the first time, results from social psychology, communication science, and communication psychology – for instance, research from the area of nonverbal communication – became relevant in the research realm of human-technology interaction. Within the last 20 years, numerous studies have employed this perspective and presented insights that enhanced the community’s understanding of the effects of humanoid interfaces.

In a 2001 i-com paper [23], we tried to systematize the realm of psychological research on embodied interface agents by differentiating and detailing independent variables (behavior and appearance), dependent variables (acceptance, efficiency, and user behavior (i. e., social effects)) as well as moderating context variables (task and interindividual differences/person variables). Twenty years later, we now aim to assess which developments have taken place and which insights empirical research has yielded. Against this background, the goal of the current review is to a) revisit developments in the area of interactive, humanoid interface agents, b) discuss the extent to which psychological knowledge and research has indeed influenced the research realm, and c) summarize relevant findings regarding the different factors and variables we distinguished 20 years ago.

2 Recent Technological Developments and State of the Art in 2021

In order to evaluate the state of research more than 20 years after research on embodied interface agents started, it first needs to be discussed which technological developments have taken place and which applications have actually become available in real life.

Due to the development of new technologies, the research area in 2021 is much broader than in 2001. Regarding applications, embodied conversational agents nowadays appear first and foremost in the form of chat bots which, for example, inform users on entertainment or e-commerce websites [34]. Shortly after the first embodied agents were introduced, research on social robots that engage in interaction with the human user also intensified. Here, real-life applications such as shopping robots [9] or robots for the care of the elderly [6] are common. In the meantime, another form of natural interaction with technology has become the most widely disseminated real-life application: Speech assistants like Alexa, Siri or Google Assistant are not embodied but conduct small dialogues related to the users’ wishes and Internet requests. While these are not human-like in the sense that they are embodied, from a psychological perspective they share many similarities with embodied agents and robots. This is highly plausible, as earlier research has demonstrated in numerous studies that the effects of embodied agents can largely be traced back to their speech [39].

Despite a potential similarity in their effects, the different forms of human-technology interaction might still lead to specific outcomes. Here, especially research on the differences between embodied agents and robots has been conducted in order to scrutinize whether the physical presence of robots brings about distinct effects. To this end, Hoffmann and Krämer [14] compared the persuasive effects of a robot and its virtual counterpart in two different situations. For scenarios in which physical manipulation is necessary, at least on the side of the user, robots seem to be beneficial because they share the user’s space of reference. For scenarios that include purely informational tasks building on persuasion, no difference between the two forms emerged. The study therefore underlined the importance of considering different contexts (i. e., task or interaction scenario) when analyzing the impact of different embodiments. Whether social effects (like persuasion by an artificial companion) can be observed will therefore depend not only on the form of embodiment alone but also on the appropriateness of the specific embodiment for the specific task or scenario (see [29]).

Although not explicitly tested in direct comparison, other results suggest that robots might lead to stronger emotional reactions (see [16]). A study in which a robot either objected with an emotional phrase to being switched off or did not object showed that the robot’s unexpectedly human-like behavior had a surprisingly strong impact on the participants. Instead of dismissing the objection to being switched off as weird for a machine, they were largely affected emotionally. Similarly, Rosenthal-von der Pütten et al. [49] demonstrated that humans are emotionally affected when they see a robot being “tortured”. Here, the reactions are also documented on a neural level: in an fMRI study, participants were confronted with videos showing a human, the toy dinosaur robot “Pleo”, and an inanimate object (a green box) being treated in either an affectionate (e. g., caressing the skin) or a violent way (e. g., hitting, choking). Self-reported emotional states and functional imaging data revealed that participants indeed reacted emotionally when seeing the affectionate and violent videos. Overall, the patterns were similar for robot and human and differed from people’s reactions to watching the box being caressed or tortured. Although this has not been tested, the same kind of reactions are not to be expected for interface agents.

Apart from the developments regarding humanoid interfaces, a new line of autonomously acting, somewhat intelligent technology has spread into numerous aspects of everyday life. Machine learning has helped to develop intelligent algorithms that acquire more and more agency – they act with increasing intelligence and autonomy and suggest decisions in applied areas (such as medical decision making, support for hiring decisions, recommender systems, etc.). Here, interactions are not necessarily based on natural dialogue but – owing to the abilities of the machine – are increasingly comparable to the interactions people have with human co-workers, assistants, or advisors. The psychological implications of the fact that (non-human-like) technology now acts like an agent are aptly described in [57].

3 Role of Psychological Findings and Research

One of the most important goals of our early papers on embodied conversational agents, such as the 2001 i-com article, was to demonstrate the potential of psychological research in the area of intelligent agents. In other articles during these early times of the research area [2], we distinguished between fundamental research (employing virtual agents to yield insights on crucial aspects of human nature and communication), realization research (contributing to the advancement of embodied conversational agents by providing basic knowledge on psychological mechanisms), and evaluation research (analyzing the effects and acceptance of embodied conversational agents). Our vision at that time was that more and more psychologists would enter the research field and contribute knowledge as well as expertise on empirical research in order to advance the field of human-technology interaction as a whole. However, this has not happened. Instead, psychological knowledge and methods have increasingly been adopted by computer scientists.

In the area of realization research, Marsella and Gratch [35] used psychological theories of emotion to build the EMA model, which has since been employed to implement human-like emotional states in embodied agents. Other computer science researchers have excelled in applying psychological research methods. When looking at the contributions to the field’s most important conference, the annual Intelligent Virtual Agents conference, a large share of the articles describe thorough empirical studies – most of them conducted by computer scientists.
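To illustrate the general logic behind appraisal-based models such as EMA – an event is appraised along a few dimensions, and an emotional state is derived from the resulting pattern – consider the following toy sketch. The dimensions, thresholds, and emotion labels here are simplified assumptions chosen purely for illustration; this is not the actual EMA implementation.

```python
# Toy sketch of an appraisal-based emotion model (illustrative only; the
# dimensions, thresholds, and labels are simplifying assumptions, not EMA [35]).
from dataclasses import dataclass

@dataclass
class Appraisal:
    desirability: float     # -1 (harmful to goals) .. +1 (beneficial to goals)
    likelihood: float       # 0 .. 1, how certain the event outcome appears
    controllability: float  # 0 .. 1, how much the agent can influence the event

def appraise_to_emotion(a: Appraisal) -> str:
    """Map an appraisal pattern to a coarse emotion label (illustrative rules)."""
    if a.desirability > 0:
        # Desirable events: certainty differentiates joy from hope.
        return "joy" if a.likelihood > 0.5 else "hope"
    if a.likelihood <= 0.5:
        # Uncertain undesirable events elicit fear.
        return "fear"
    # Certain undesirable events: coping potential differentiates anger/distress.
    return "anger" if a.controllability > 0.5 else "distress"

# Example: an almost certain, barely controllable negative event -> "distress".
print(appraise_to_emotion(Appraisal(desirability=-0.8, likelihood=0.9,
                                    controllability=0.2)))
```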

This does not mean that psychologists are now dispensable in the field, but they might have to reconsider their role. Direct interdisciplinary collaborations should become more important – not only with computer scientists, to whom psychological knowledge can be directly transferred in the form of models and implementations [19], but also with scholars from the area of ethics (for an example see [28]). While psychologists can help to understand which effects can be expected when conversational agents are built in specific ways, they usually refrain from taking a normative stance – which is why a collaboration with ethics scholars is beneficial.

In total, however, the conclusion after 20 years in the research field is: psychological knowledge and research was and still is important for understanding the effects of embodied agents and related artificial entities and for advancing the field. But while psychology is indispensable, psychologists are not.

4 Summary of Relevant Findings

The core of the 2001 i-com paper was to give an overview of the relevant variables that need to be addressed when researching the effects of embodied agents. We distinguished between independent variables (i. e., attributes of the embodied agent such as its behavior and appearance), dependent variables (acceptance, efficiency, and user behavior) as well as moderating context variables (task and interindividual differences/person variables). These variables still seem to be helpful in structuring the research area and were the basis for two more recent overviews of research in the area [28], [29]. In these reviews, we collected findings in the tradition of the 2001 i-com paper, as there is indeed recent research on nearly every variable we identified in 2001. In the following, we again summarize the state of the art on these factors.

4.1 Independent Variables (Behavior and Appearance)

In the 2001 paper [23], we argued that research on embodied agents would only make progress if agent variables were varied systematically in order to understand their influence on the human user in a controlled way. As the most important variables, we identified behavior and appearance. Comparing research on the behavior and appearance of embodied agents over the last 20 years, behavior has indeed proven to be very important, while appearance seems to be less influential.

4.1.1 Behavior

We will reiterate findings on both communicative behavior and nonverbal behavior of embodied agents (see summaries in [28], [29]). With regard to communicative behavior, results by Rickenberg and Reeves [46], which we already cited in the 2001 i-com article, still intriguingly demonstrate that the (communicative) behavior of an agent is decisive. They showed that whether a virtual character on a website monitored the user or ignored him/her had an impact on the user’s perceived anxiety, and they conclude that it is not sufficient “to focus on whether or not an animated character is present. Rather the ultimate evaluation is similar to those for real people, it depends on what the character does, what it says and how it presents itself” (p. 55). Indeed, more recent studies indicate that, for example, the quantity of the agent’s communicative utterances is influential. It was found that an interview agent’s self-disclosure (quality of utterances) led only to minor effects, while the agent’s verboseness (quantity of utterances) affected both the participants’ verbal behavior (with regard to word usage and intimacy of answers) and their perception of the interview [62]. Participants more often disclosed specific embarrassing situations, their biggest disappointment, and what they feel guilty about to the agent, regardless of its previous self-disclosure. Moreover, when the agent was more talkative, it was generally evaluated more positively and the interview was perceived as more pleasant. It can therefore be assumed that talkativeness led to a more favorable evaluation by the users, which subsequently facilitated self-disclosure and thereby social reactions.

There has been more research on the nonverbal behavior of embodied agents than on their verbal behavior. This is in fact plausible given that virtual embodiment enables the use not only of human-like body language (gestures, smiles, expressions) but also of unique robot and machine behavior (e. g., eyes blinking in different colors; [47]). Krämer and Manzeschke [28] summarize the relevant findings as follows: With regard to gestures, Krämer, Simons, and Kopp [31] demonstrated that when the intelligent virtual agent Max showed self-touching gestures (e. g., touching its arm or face), this had positive effects on the experiences and evaluations of the user, whereas eyebrow-raising evoked less positive experiences and evaluations than no eyebrow-raising. Based on the notion that gestures influence the perception of competence [36], further research manipulating the extensiveness of gesture usage and the gender of a leader showed a positive impact of extensive nonverbal behavior: Participants were more willing to hire the virtual person who used hand and arm gestures than the more rigid person. The virtual person using gestures was also perceived as exhibiting more leadership skills and general competence than the person in the non-gesture condition [18].

The effects and efficiency of nonverbal behavior have especially been debated in the area of pedagogical agents. Here, researchers have discussed whether nonverbal behavior is decisive for learning experiences and whether it therefore needs to be implemented in pedagogical agents. Baylor and Ryu [1], for example, suggest that the key advantage of embodied pedagogical agents is that human-likeness creates more positive learning experiences and provides a strong motivating effect. However, Rajan et al. [44] demonstrated that it is first and foremost the voice that is responsible for these effects. Moreno [37] further summarized that – in line with results showing that especially the voice is decisive – there is no evidence for the social cue hypothesis, as it has not been shown that the mere presence of social aspects such as a human-like body leads to distinct effects. The cognitive guiding functions provided by vocalizations and a program’s didactic concept, however, proved to be influential. More recent research [4] as well as a meta-analysis [51] have also supported the notion that voice is more important than nonverbal expressiveness. Still, as Krämer [22] argues, these results have to be considered with caution given that the systems evaluated did not (yet) include very sophisticated nonverbal behavior. Instead, it needs to be considered that nonverbal behavior is very complex: The dynamics of the movements are important, very subtle movements have distinct effects (e. g., head movements such as a head tilt), and the effects are context-dependent (e. g., a smile leads to a different effect when accompanied by a head tilt). This complexity, however, is mostly not implemented in pedagogical agents. So far, only very few pedagogical agent systems have achieved realistic and sufficiently subtle nonverbal behavior to administer a fair test. And indeed, when employing technology that provides realistic, dynamic nonverbal behavior, results show that nonverbal rapport behavior leads to an increase in effort and performance [26]. Therefore, the conclusion that embodiment and nonverbal behavior are less decisive than voice is premature.

More recently, another form of nonverbal behavior, of which only robots are capable, has been analyzed [13]. In a series of studies, it was investigated whether the positive effects of interpersonal touch are also observable for robot touch. Based on media equation assumptions [45], an experimental study in which a robot either touched or did not touch a human interaction partner revealed positive emotional reactions towards robot-initiated touch as well as increased compliance with the robot’s suggestions [15].

In conclusion, nonverbal behavior has been shown to be influential, even if it still needs to be clarified how large its effects are compared to those of verbal behavior.

4.1.2 Physical Appearance

The effects of physical appearance variables have also been summarized recently [28], [29]. Several studies have indicated that the appearance of a virtual character matters, for example for the acceptance and evaluation of the character [8], [60]. In particular, Domagk [8] showed that a pedagogical agent has more positive effects when its appearance (and voice) is likeable. Building on this, we compared the impact of a virtual tutor depending on its appearance as either a cartoon-like rabbit character or a realistic anthropomorphic agent [53]. Results showed that the rabbit-like agent was not only preferred, but people also exposed themselves to the tutoring session for longer when the rabbit provided feedback. However, this was not related to an increase in learning performance. Other studies, which focus on credibility rather than learning and likeability, show that more anthropomorphic characters are perceived as more credible [42]. More recent studies with more sophisticated appearances show that different appearances appeal to different groups of users: While students prefer non-human-looking agents, elderly users in particular benefit from the social outcomes of a humanoid appearance [54]. In addition, results demonstrate that attractive agents were found to be more likeable and more persuasive. These effects, however, did not increase in a long-term relationship with an agent.

In sum, there is sufficient evidence to conclude that physical appearance can influence the users’ experience when interacting with artificial entities. Findings also suggest that its impact might be weaker than that of behavioral aspects and therefore less influential than early studies suggested [7]. Also, given that different studies focus on different dimensions of appearance (e. g., realism, anthropomorphism, attractiveness/likeability), it is still difficult to conclude which physical features are decisive. A first attempt at systematizing the area is presented by Straßmann and Krämer [55].

4.2 Dependent Variables (Acceptance, Efficiency and User Behavior (Social Effects))

Twenty years ago, we suggested that the dependent variables acceptance, efficiency, and impact on user behavior would be most important to address. This was based on a distinction of relevant variables by Dehn and van Mulken [7], who distinguished between subjective experience (evaluation of the system), performance (e. g., increasing knowledge or understanding on the part of the user), and behavior when interacting with the system (attention, self-presentation). Similarly, but with terminology based rather on classic usability research, Bente and Krämer [2] differentiate between acceptance, efficiency, and effects on user behavior.

The classic evaluation of the system in the sense of acceptance is rarely addressed in the research realm – probably because it is less interesting when conducting fundamental research and trying to scrutinize the (psychological) mechanisms. Especially when a technology is new, however, evaluation and acceptance do play a role in research (see research on sex robots, [58]); apart from this, acceptance has not been a particular focus within research on artificial entities.

The dependent variable efficiency has first and foremost been addressed in research on pedagogical agents (for an overview see [22]). As has already been summarized above, results on the virtual agents’ efficiency are mixed.

Most research – and probably the most interesting – has been conducted on the effects of artificial agents on the user’s behavior, specifically with a view to social effects. Early results showed that humanoid features on the screen led to distinct self-presentation [24], [52], social inhibition effects [46], natural communication on the part of the user [21], as well as mimicry of the agent’s smiling behavior [27].

Research that followed these early findings mostly tried to scrutinize the mechanisms behind these social effects. Here, the assumptions of the so-called media equation [45] play a major role. Supporters of the media equation assumption see social reactions to artificial entities as truly social in the sense that “People respond socially and naturally to media even though they believe it is not reasonable to do so, and even though they don’t think that these responses characterize themselves.” [45, p. 7]. Nass and Moon [39] suggest the term ‘ethopoeia’ as an explanation for this unconscious and automatic behavior (social reaction) which is inconsistent with one’s conscious opinion (computers do not need social treatment). According to this approach, minimal social cues like a human-sounding voice mindlessly (cf. [32]) trigger social responses because humans cannot avoid reacting automatically to social cues. The ethopoeia approach is supported by the fact that participants in the studies of Nass et al. obviously did not consciously recognize their social behaviors: when asked in the debriefing, they stated that they did not act socially (e. g., politely) towards the computers and that they believed such behavior to be inappropriate [41].

In our own research, we were especially interested in testing the ethopoeia model and the Threshold Model of Social Influence (TMSI; [3]) against each other. The major difference is that the TMSI assumes a fundamental difference between agents and avatars, in the sense that users react socially to avatars (i. e., mediated fellow humans) but will only react socially to agents when these show sufficient social cues. The ethopoeia model, on the other hand, assumes that agents will automatically evoke social reactions in the same way as fellow humans do. Therefore, von der Pütten, Krämer, Gratch, and Kang [64] empirically tested the TMSI against the ethopoeia approach. With the aim of testing the aforementioned assumptions, the agency and behavioral realism of a virtual agent (the Rapport Agent; [11]) were experimentally manipulated in a 2×2 between-subjects design. Participants were led to believe that they would be interacting either with another participant mediated by a virtual character or with an autonomous computer program. Moreover, the agent with higher behavioral realism featured responsive nonverbal behavior while participants were interacting with it, whereas the agent in the low behavioral realism condition showed only idle behavior (breathing, eye blinking) but no responsive behaviors. According to the TMSI, interaction effects between agency and behavioral realism should occur (in the sense that social reactions are observable in both avatar conditions but only in the agent condition with high behavioral realism). However, if the ethopoeia concept in its revised version (which acknowledges that automatic and unconscious social reactions will be stronger if there are more social cues; [38], [40]) is more accurate, social reactions should be reinforced when behavioral realism increases and should be independent of assumed agency. During the interaction, the Rapport Agent asked the participants intimate questions so that the participants’ self-disclosure behavior could be used as a dependent variable. Additionally, self-report scales were employed to evaluate the virtual character as well as the situation.

The data analyses revealed that the belief of interacting with either an avatar or an agent resulted in barely any differences with regard to the evaluation of the virtual character or behavioral reactions, whereas higher behavioral realism affected both (e. g., participants experienced more feelings of mutual awareness, and they used more words during the interaction when behavioral realism was high). However, no interaction effects of the factors of agency and behavioral realism emerged. Ultimately, since main effects of behavioral realism, but no interaction effects, were found, the results support the Revised Ethopoeia Concept but not the TMSI.
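To make the two competing predictions concrete, the following minimal sketch spells out the contrast logic for the 2×2 design. The cell means are purely hypothetical numbers chosen for illustration; they are not data from the study [64].

```python
# Illustrative sketch with hypothetical cell means (not the study's data):
# which data patterns the two accounts predict for the 2x2 design
# (agency x behavioral realism), using, e.g., word count as the DV.

def effects(cells):
    """Main effects and interaction contrast from the four cell means."""
    av_lo, av_hi = cells[("avatar", "low")], cells[("avatar", "high")]
    ag_lo, ag_hi = cells[("agent", "low")], cells[("agent", "high")]
    main_realism = ((av_hi + ag_hi) - (av_lo + ag_lo)) / 2
    main_agency = ((av_lo + av_hi) - (ag_lo + ag_hi)) / 2
    # Interaction: does the realism effect differ between avatar and agent?
    interaction = (av_hi - av_lo) - (ag_hi - ag_lo)
    return main_realism, main_agency, interaction

predictions = {
    # TMSI: social reactions in both avatar cells, but only in the
    # high-realism agent cell -> agency x realism interaction expected.
    "TMSI": {("avatar", "low"): 100, ("avatar", "high"): 100,
             ("agent", "low"): 40, ("agent", "high"): 100},
    # Revised ethopoeia: more social cues -> stronger social reactions,
    # regardless of agency -> main effect of realism only.
    "Revised ethopoeia": {("avatar", "low"): 60, ("avatar", "high"): 100,
                          ("agent", "low"): 60, ("agent", "high"): 100},
}

for account, cells in predictions.items():
    realism, agency, interaction = effects(cells)
    print(f"{account}: realism={realism:+.0f}, agency={agency:+.0f}, "
          f"interaction={interaction:+.0f}")
# The reported pattern (main effect of realism, no interaction) matches
# the second account, i.e., the revised ethopoeia concept.
```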

However, it should be noted that a recent meta-analysis by Fox et al. [10] provided evidence for the notion that the perception of agency is decisive when interacting with virtual characters. The analysis revealed that, overall, social reactions were stronger when people thought that they were interacting with another human than when they believed they were interacting with a computer program. Therefore, the role of agency in the emergence of social reactions is still unclear. In general, however, it can be concluded that artificial entities that display social cues will elicit social reactions on the part of the user – which has been demonstrated for all forms, from speech assistants to virtual agents to humanoid robots.

4.3 Moderating Context Variables (Task and Interindividual Differences/Person Variables)

Although it would certainly be highly beneficial, the influence of the task is only rarely considered in studies on embodied conversational agents or other forms of artificial entities. The rare studies which exist impressively demonstrate that effects of the artificial entity depend on the situation and task (see above, [14]). Here, more research should be conducted in order to better scrutinize this important area.

Regarding interindividual differences, various user attributes have been considered as potential predictors of reactions towards socially interactive agents, among them gender, age, computer literacy, and personality. An overview is given in [28], [29], which is summarized here. Interestingly, research is still largely focusing on the variables which we proposed to be important in 2001 [23].

Gender. Krämer, Hoffmann, and Kopp [25] revealed in their re-analysis of earlier studies that men and women have different preferences with regard to intelligent virtual agents (IVAs). In fact, compared to the effects of age and computer literacy, the influence of gender was more important. In one study, women were found to be more nervous during the interaction with the agent, which contradicts the vision that IVAs will facilitate human-computer interaction for this user group. The data further suggest that female users’ interest and acceptance can be increased when nonverbal behaviors are implemented (here: self-touching gestures) and when the agent frequently smiles. Interestingly, Krämer et al. [26] demonstrated with regard to pedagogical agents that nonverbal behaviors which communicate rapport were especially beneficial when displayed by agents of the opposite sex. In sum, it can be concluded that women especially benefit from increased nonverbal behavior of the agent, in line with the finding that women are more sensitive to nonverbal behaviors [12].

Age. It is important to analyze older users’ reactions as more and more technology is developed to enable ambient assisted living for seniors. Part of this development are also multiple virtual agent applications (see, for example, [20]). Although the overall goal is that an IVA facilitates human-technology interaction, studies suggest that older persons are more nervous when interacting with an IVA than younger ones [31]. Further results show that empathic nonverbal behavior can be helpful [17]. Interestingly, an agent that behaves in a dominant way is more persuasive when interacting with elderly users [50]. With regard to appearance variables, older people seem to prefer more humanoid appearances [55], [56].

Computer literacy. Computer novices proved to be more nervous when interacting with an IVA than other users [25]. This is in line with previous findings that computer laypeople do not benefit from IVAs in the way that is typically hoped [30]. Additional research will need to demonstrate under which conditions non-computer-literate users can be supported in their interactions with IVAs.

Personality. Personality traits (for example, the so-called Big Five personality traits: extraversion, neuroticism, conscientiousness, openness, and agreeableness) have long been discussed as potential influencing factors in human-agent interactions. However, the Big Five themselves seem to have only limited explanatory value: Results of a study with the Rapport Agent show that participants’ personality traits affected their subjective feelings after the interaction, as well as their evaluation of the agent and their actual behavior [63]. Of the various personality traits, those which relate to stable behavioral patterns in social contact (agreeableness, extraversion, approach avoidance, self-monitoring sensitivity, shyness, public self-consciousness) were found to be predictive of the positive and negative feelings participants reported after the interaction, the evaluation of the agent, and the number of words they used during the conversation. For instance, the higher one’s ratings on extraversion and public self-consciousness, the more words were used; furthermore, the shyer people were, the more negatively they evaluated the agent, whereas agreeableness increased positive feelings after the interaction. Other personality traits (e. g., openness, neuroticism) as well as gender and age, however, did not affect the evaluation.

5 Conclusion

In our 2001 conclusion, we first and foremost stated that initial insightful results had been yielded but that much more research needed to follow. We especially criticized that many aspects had only been researched in a rather unsystematic way. Now, 20 years later, we can state that numerous results have indeed been presented, but that most studies are not connected to a larger, systematic research framework. An exception can be found, for example, in the area of the agents’ appearance, where a systematic framework has been proposed [54]. In order to not only conduct and publish “yet another study on embodied conversational agents”, more research frameworks or comprehensive theories would be needed. These would also need to include the form of the artificial entity, as empirical evidence shows that speech assistants, embodied conversational agents, and robots can have distinct effects. Most importantly, future studies should, and very likely will, assess the physiological and neural correlates of processes in human-agent interactions in order to better understand the universals and specificities in the perception of real and virtual others [48], [61]. Altogether, it can therefore be stated that there is sufficient potential for extensive research in the next twenty years.

About the authors

Nicole Krämer

Nicole Krämer is Full Professor of Social Psychology, Media and Communication at the University of Duisburg-Essen, Germany. She completed her PhD in Psychology at the University of Cologne, Germany, in 2001 and received the venia legendi for psychology in 2006. Dr. Krämer’s research focuses on social psychological aspects of human-machine-interaction (especially social effects of robots and virtual agents) and computer-mediated-communication (CMC). She investigates processes of information selection, opinion building, and relationship maintenance of people communicating via Internet, especially via social networking sites. She heads numerous projects that received third party funding. She served as Editor-in-Chief of the Journal of Media Psychology 2015–2017 and currently is Associate Editor of the Journal of Computer Mediated Communication (JCMC).

Gary Bente

Gary Bente is a professor in the Department of Communication at Michigan State University (MSU) and the director of the Center for Avatar Research and Immersive Media Applications (CARISMA) at the School of Communication Arts and Sciences at MSU. He has pioneered the use of VR in nonverbal communication research and focuses on the study of social affordances and emotion perception in shared virtual environments, with an emphasis on bio-behavioral methods. He has co-edited the German Textbook in Media Psychology and published over 250 peer reviewed journal articles in the area of interpersonal communication and social cognition, media effects and psychophysiological research methods.

References

[1] Baylor, A. L., & Ryu, J. (2003). The effects of image and animation in enhancing pedagogical agent persona. Journal of Educational Computing Research, 28(4), 373–394. https://doi.org/10.2190/V0WQ-NWGN-JB54-FAT4.

[2] Bente, G., & Krämer, N. C. (2001). Psychologische Aspekte bei der Implementierung und Evaluierung nonverbal agierender Interface-Agenten [Psychological aspects of implementing and evaluating nonverbally acting interface agents]. In H. Oberquelle, R. Oppermann, & J. Krause (Eds.), Mensch und Computer (pp. 275–285). Vieweg+Teubner Verlag. https://doi.org/10.1007/978-3-322-80108-1_29.

[3] Blascovich, J. (2002, September 30–October 2). A theoretical model of social influence for increasing the utility of collaborative virtual environments. In Proceedings of the 4th International Conference on Collaborative Virtual Environments (pp. 25–30), Bonn, Germany. https://doi.org/10.1145/571878.571883.

[4] Carlotto, T., & Jaques, P. A. (2016). The effects of animated pedagogical agents in an English-as-a-foreign-language learning environment. International Journal of Human-Computer Studies, 95, 15–26. https://doi.org/10.1016/j.ijhcs.2016.06.001.

[5] Cassell, J., Bickmore, T., Campbell, L., Vilhjálmsson, H., & Yan, H. (2000). Human conversation as a system framework: Designing embodied conversational agents. In J. Cassell, J. Sullivan, S. Prevost, & E. Churchill (Eds.), Embodied conversational agents (pp. 29–63). Cambridge: MIT Press. https://doi.org/10.7551/mitpress/2697.001.0001.

[6] Chang, W., & Sabanovic, S. (2015). Interaction expands function: Social shaping of the therapeutic robot PARO in a nursing home. In 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 343–350). IEEE. https://doi.org/10.1145/2696454.2696472.

[7] Dehn, D. M., & van Mulken, S. (2000). The impact of animated interface agents: A review of empirical research. International Journal of Human-Computer Studies, 52(1), 1–22. https://doi.org/10.1006/ijhc.1999.0325.

[8] Domagk, S. (2010). Do pedagogical agents facilitate learner motivation and learning outcomes? The role of the appeal of agent’s appearance and voice. Journal of Media Psychology, 22(2), 84–97. https://doi.org/10.1027/1864-1105/a000011.

[9] Döring, M., Poeschl, S., Gross, H. M., Bley, A., Martin, C., & Boehme, H. J. (2015). User-centered design and evaluation of a mobile shopping robot. International Journal of Social Robotics, 7, 203–225. https://doi.org/10.1007/s12369-014-0257-8.

[10] Fox, J., Ahn, S. J., Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N. (2015). Avatars versus agents: A meta-analysis quantifying the effects of agency on social influence. Human-Computer Interaction, 30(5), 401–432. https://doi.org/10.1080/07370024.2014.921494.

[11] Gratch, J., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M., van der Werf, R. J., & Morency, L. P. (2006). Virtual rapport. In J. Gratch, M. Young, R. Aylett, D. Ballin, & P. Olivier (Eds.), Intelligent Virtual Agents. Lecture Notes in Computer Science, Vol. 4133 (pp. 14–27). Springer. https://doi.org/10.1007/11821830_2.

[12] Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style. Johns Hopkins University Press.

[13] Hoffmann, L. (2017). That robot touch that means so much: On the psychological effects of human-robot touch (Dissertation). University of Duisburg-Essen.

[14] Hoffmann, L., & Krämer, N. C. (2013). Investigating the effects of physical and virtual embodiment in task-oriented and conversational contexts. International Journal of Human-Computer Studies, 71(7-8), 763–774. https://doi.org/10.1016/j.ijhcs.2013.04.007.

[15] Hoffmann, L., & Krämer, N. C. (2021). The persuasive power of robot touch: Behavioral and evaluative consequences of non-functional touch from a robot. PLOS ONE, 16(5), e0249554. https://doi.org/10.1371/journal.pone.0249554.

[16] Horstmann, A. C., Bock, N., Linhuber, E., Szczuka, J. M., Straßmann, C., & Krämer, N. C. (2018). Do a robot’s social skills and its objection discourage interactants from switching the robot off? PLOS ONE, 13(7), e0201581. https://doi.org/10.1371/journal.pone.0201581.

[17] Hosseinpanah, A., Krämer, N. C., & Straßmann, C. (2018, December). Empathy for everyone? The effect of age when evaluating a virtual agent. In Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 184–190), Southampton, United Kingdom. https://doi.org/10.1145/3284432.3284442.

[18] Klatt, J., Haferkamp, N., Tetzlaff, L., & Krämer, N. C. (2012, June). How to be… a leader – Examining the impact of gender and nonverbal behaviour on the perception of leaders [Paper presentation]. International Communication Association 62nd Annual Meeting, Phoenix, Arizona, United States.

[19] Kopp, S., & Krämer, N. C. (2021). Revisiting human-agent communication: The importance of incremental co-construction and understanding mental states. Frontiers in Psychology, 12, 580955. https://doi.org/10.3389/fpsyg.2021.580955.

[20] Kopp, S., Brandt, M., Buschmeier, H., Cyra, K., Freigang, F., Krämer, N., Kummert, F., Opfermann, C., Pitsch, K., Schillingmann, L., Straßmann, C., Wall, E., & Yaghoubzadeh, R. (2018). Conversational assistants for elderly users – The importance of socially cooperative dialogue. In E. André, T. Bickmore, S. Vrochidis, & L. Wanner (Eds.), Proceedings of the AAMAS workshop on intelligent conversation agents in home and geriatric care applications co-located with the federated AI meeting. CEUR workshop proceedings, Vol. 2338 (pp. 10–17). Aachen: RWTH.

[21] Krämer, N. C. (2005). Social communicative effects of a virtual program guide. In T. Panayiotopoulos, J. Gratch, R. Aylett, D. Ballin, P. Olivier, & T. Rist (Eds.), Intelligent Virtual Agents. IVA 2005. Lecture Notes in Computer Science, Vol. 3661 (pp. 442–453). Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550617_37.

[22] Krämer, N. C. (2017). The immersive power of social interaction. In D. Liu, C. Dede, R. Huang, & J. Richards (Eds.), Smart computing and intelligence (pp. 55–70). Springer, Singapore. https://doi.org/10.1007/978-981-10-5490-7_4.

[23] Krämer, N., & Bente, G. (2001). Mehr als Usability: (Sozial-)psychologische Aspekte bei der Evaluation von anthropomorphen Interface-Agenten [More than usability: (Socio-)psychological aspects in the evaluation of anthropomorphic interface agents]. i-com, p. 26. https://doi.org/10.1524/icom.2001.0.0.26.

[24] Krämer, N. C., Bente, G., & Piesk, J. (2003). The ghost in the machine: The influence of embodied conversational agents on user expectations and user behaviour in a TV/VCR application. In G. Bieber & T. Kirste (Eds.), IMC workshop, assistance, mobility, applications (pp. 121–128). IRB Verlag.

[25] Krämer, N. C., Hoffmann, L., & Kopp, S. (2010). Know your users! Empirical results for tailoring an agent’s nonverbal behavior to different user groups. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science, Vol. 6356 (pp. 468–474). Springer. https://doi.org/10.1007/978-3-642-15892-6_50.

[26] Krämer, N. C., Karacora, B., Lucas, G., Dehghani, M., Rüther, G., & Gratch, J. (2016). Closing the gender gap in STEM with friendly male instructors? On the effects of rapport behavior and gender of a virtual agent in an instructional interaction. Computers & Education, 99, 1–13. https://doi.org/10.1016/j.compedu.2016.04.002.

[27] Krämer, N. C., Kopp, S., Becker-Asano, C., & Sommer, N. (2013). Smile and the world will smile with you – The effects of a virtual agent’s smile on users’ evaluation and behavior. International Journal of Human-Computer Studies, 71(3), 335–349. https://doi.org/10.1016/j.ijhcs.2012.09.006.

[28] Krämer, N. C., & Manzeschke, A. (2021). Social reactions to socially interactive agents and their ethical implications. In The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics (Vol. 1): Methods, Behavior, Cognition (1st ed., pp. 77–104). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3477322.3477326.

[29] Krämer, N. C., Rosenthal-von der Pütten, A. M., & Hoffmann, L. (2015). Social effects of virtual and robot companions. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (1st ed., pp. 137–159). John Wiley & Sons. https://doi.org/10.1002/9781118426456.ch6.

[30] Krämer, N. C., Rüggenberg, S., Meyer zu Kniendorf, C., & Bente, G. (2002). Schnittstelle für alle? Möglichkeiten zur Anpassung anthropomorpher Interface Agenten an verschiedene Nutzergruppen [Interface for everyone? Options for adapting anthropomorphic interface agents to different user groups]. In M. Herczeg, W. Prinz, & H. Oberquelle (Eds.), Mensch und Computer. Berichte des German Chapter of the ACM, Vol. 56 (pp. 125–134). Teubner. https://doi.org/10.1007/978-3-322-89884-5_13.

[31] Krämer, N. C., Simons, N., & Kopp, S. (2007). The effects of an embodied conversational agent’s nonverbal behavior on user’s evaluation and behavioral mimicry. In C. Pelachaud, J. C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent Virtual Agents (IVA 2007). Lecture Notes in Computer Science, Vol. 4722 (pp. 238–251). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74997-4_22.

[32] Langer, E. J. (1989). Mindfulness. Addison-Wesley.

[33] Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stone, B. A., & Bhogal, R. S. (1997). The persona effect: Affective impact of animated pedagogical agents. In S. Pemberton (Ed.), Human Factors in Computing Systems: CHI’97 Conference Proceedings (pp. 359–366). New York: ACM Press. https://doi.org/10.1145/258549.258797.

[34] Lew, Z., Walther, J. B., Pang, A., & Shin, W. (2018). Interactivity in online chat: Conversational contingency and response latency in computer-mediated communication. Journal of Computer-Mediated Communication, 23(4), 201–221. https://doi.org/10.1093/jcmc/zmy009.

[35] Marsella, S. C., & Gratch, J. (2009). EMA: A process model of appraisal dynamics. Cognitive Systems Research, 10(1), 70–90. https://doi.org/10.1016/j.cogsys.2008.03.005.

[36] Maricchiolo, F., Gnisci, A., Bonaiuto, M., & Ficca, G. (2009). Effects of different types of hand gestures in persuasive speech on receivers’ evaluations. Language and Cognitive Processes, 24(2), 239–266. https://doi.org/10.1080/01690960802159929.

[37] Moreno, R. (2003). The role of software agents in multimedia learning environments: When do they help students reduce cognitive load? [Paper presentation]. European Association for Research on Learning and Instruction Annual Conference, Padova, Italy.

[38] Morkes, J., Kernal, H. K., & Nass, C. (1999). Effects of humor in task-oriented human-computer interaction and computer-mediated communication: A direct test of SRCT theory. Human-Computer Interaction, 14(4), 395–435. https://doi.org/10.1207/S15327051HCI1404_2.

[39] Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153.

[40] Nass, C., & Yen, C. (2012). The man who lied to his laptop: What we can learn about ourselves from our machines. Penguin.

[41] Nass, C., Moon, Y., & Carney, P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093–1110. https://doi.org/10.1111/j.1559-1816.1999.tb00142.x.

[42] Nowak, K. L., & Rauh, C. (2005). The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction. Journal of Computer-Mediated Communication, 11(1), 153–178. https://doi.org/10.1111/j.1083-6101.2006.tb00308.x.

[43] Parise, S., Kiesler, S., Sproull, L., & Waters, K. (1999). Cooperating with life-like interface agents. Computers in Human Behavior, 15(2), 123–142. https://doi.org/10.1016/S0747-5632(98)00035-1.

[44] Rajan, S., Craig, S. D., Gholson, B., Person, N. K., Graesser, A. C., & The Tutoring Research Group. (2001). AutoTutor: Incorporating back-channel feedback and other human-like conversational behaviors into an intelligent tutoring system. International Journal of Speech Technology, 4, 117–126. https://doi.org/10.1023/A:1017319110294.

[45] Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. CSLI Publications.

[46] Rickenberg, R., & Reeves, B. (2000). The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 49–56), The Hague, Netherlands. https://doi.org/10.1145/332040.332406.

[47] Rosenthal-von der Pütten, A. M., Krämer, N. C., & Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics, 10, 569–582. https://doi.org/10.1007/s12369-018-0466-7.

[48] Rosenthal-von der Pütten, A., Krämer, N. C., Maderwald, S., Brand, M., & Grabenhorst, F. (2019). Neural mechanisms for accepting and rejecting artificial social partners in the uncanny valley. Journal of Neuroscience, 39(33), 6555–6570. https://doi.org/10.1523/JNEUROSCI.2956-18.2019.

[49] Rosenthal-von der Pütten, A. M., Schulte, F. P., Eimler, S. C., Sobieraj, S., Hoffmann, L., Maderwald, S., Brand, M., & Krämer, N. C. (2014). Investigations on empathy towards humans and robots using fMRI. Computers in Human Behavior, 33, 201–212. https://doi.org/10.1016/j.chb.2014.01.004.

[50] Rosenthal-von der Pütten, A. M., Straßmann, C., Yaghoubzadeh, R., Kopp, S., & Krämer, N. C. (2019). Dominant and submissive nonverbal behavior of virtual agents and its effects on evaluation and negotiation outcome in different age groups. Computers in Human Behavior, 90, 397–409. https://doi.org/10.1016/j.chb.2018.08.047.

[51] Schroeder, N. L., & Adesope, O. O. (2014). A systematic review of pedagogical agents’ persona, motivation, and cognitive load implications for learners. Journal of Research on Technology in Education, 46(3), 229–251. https://doi.org/10.1080/15391523.2014.888265.

[52] Sproull, L., Subramani, M., Kiesler, S., Walker, J. H., & Waters, K. (1996). When the interface is a face. Human-Computer Interaction, 11(2), 97–124. https://doi.org/10.1207/s15327051hci1102_1.

[53] Sträfling, N., Fleischer, I., Polzer, C., Leutner, D., & Krämer, N. C. (2010). Teaching learning strategies with a pedagogical agent: The effects of a virtual tutor and its appearance on learning and motivation. Journal of Media Psychology, 22(2), 73–83. https://doi.org/10.1027/1864-1105/a000010.

[54] Straßmann, C. (2017). All eyes on the agent’s appearance?! Investigation of target-group-related social effects of a virtual agent’s appearance in longitudinal human-agent interactions (Dissertation). University of Duisburg-Essen.

[55] Straßmann, C., & Krämer, N. C. (2017). A categorization of virtual agent appearances and a qualitative study on age-related user preferences. In J. Beskow, C. Peters, G. Castellano, C. O’Sullivan, I. Leite, & S. Kopp (Eds.), Intelligent Virtual Agents. Lecture Notes in Computer Science, Vol. 10498 (pp. 413–422). Springer. https://doi.org/10.1007/978-3-319-67401-8_51.

[56] Straßmann, C., & Krämer, N. C. (2018). A two-study approach to explore the effect of user characteristics on users’ perception and evaluation of a virtual assistant’s appearance. Multimodal Technologies and Interaction, 2(4), 1–25. https://doi.org/10.3390/mti2040066.

[57] Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026.

[58] Szczuka, J. M., & Krämer, N. C. (2017). Not only the lonely – How men explicitly and implicitly evaluate the attractiveness of sex robots in comparison to the attractiveness of women, and personal characteristics influencing this evaluation. Multimodal Technologies and Interaction, 1(1), Article 3. https://doi.org/10.3390/mti1010003.

[59] Takeuchi, A., & Naito, T. (1995). Situated facial displays: Towards social interaction. In I. Katz, R. Mack, L. Marks, M. B. Rosson, & J. Nielsen (Eds.), Human Factors in Computing Systems: CHI’95 Conference Proceedings (pp. 450–455). New York: ACM Press. https://doi.org/10.1145/223904.223965.

[60] van Vugt, H. C., Konijn, E. A., Hoorn, J. F., Keur, I., & Eliéns, A. (2007). Realism is not all! User engagement with task-related interface characters. Interacting with Computers, 19(2), 267–280. https://doi.org/10.1016/j.intcom.2006.08.005.

[61] Vogeley, K., & Bente, G. (2010). “Artificial humans”: Psychology and neuroscience perspectives on embodiment and nonverbal communication. Neural Networks, 23(8-9), 1077–1090. https://doi.org/10.1016/j.neunet.2010.06.003.

[62] von der Pütten, A. M., Hoffmann, L., Klatt, J., & Krämer, N. C. (2011). Quid pro quo? Reciprocal self-disclosure and communicative accommodation towards a virtual interviewer. In H. H. Vilhjálmsson, S. Kopp, S. Marsella, & K. R. Thórisson (Eds.), Intelligent Virtual Agents. Lecture Notes in Computer Science, Vol. 6895 (pp. 183–194). Springer. https://doi.org/10.1007/978-3-642-23974-8_20.

[63] von der Pütten, A. M., Krämer, N. C., & Gratch, J. (2010). How our personality shapes our interactions with virtual characters – Implications for research and development. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents. Lecture Notes in Computer Science, Vol. 6356 (pp. 208–221). Springer. https://doi.org/10.1007/978-3-642-15892-6_23.

[64] von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S. H. (2010). “It doesn’t matter what you are!” Explaining social effects of agents and avatars. Computers in Human Behavior, 26(6), 1641–1650. https://doi.org/10.1016/j.chb.2010.06.012.

Published Online: 2021-11-27
Published in Print: 2021-12-20

© 2021 Walter de Gruyter GmbH, Berlin/Boston
