Computers in Human Behavior

Volume 90, January 2019, Pages 331-342
Full length article
Emotional processes in human-robot interaction during brief cognitive testing

https://doi.org/10.1016/j.chb.2018.08.013

Highlights

  • A neglected area of research is the use of social robots for brief cognitive assessment.

  • People's emotional reactions towards a robot and a human examiner did not differ.

  • Gaze patterns might be considered behavioural markers for assessing cognitive load.

  • Humanoid robots may be a way to open up the development of new brief cognitive tests.

Abstract

With the rapid rise in robot presence in a variety of life domains, understanding how robots influence people's emotions during human-robot interactions is important for ensuring their acceptance in society. Mental health care, in particular, is considered the field in which robotics technology will bring the most dramatic changes in the near future. In this context, the present study sought to determine whether a brief cognitive assessment conducted by a robot elicited different interaction-related emotional processes than a traditional assessment conducted by an expert clinician. A non-clinical sample of 29 young adults (17 females; M = 24.5, SD = 2.3 years) were asked to complete two cognitive tasks twice, in counterbalanced order, once administered by an expert clinician and once by an autonomous humanoid robot. Self-reported measures of affective states and assessment of physiological arousal did not reveal any difference in emotional processes between human-human and human-robot interactions. Similarly, cognitive performances and workload did not differ across conditions. Analysis of non-verbal behaviour, however, showed that participants spent more time looking at the robot (d = 1.3) and made fewer gaze aversions (d = 1.3) in interacting with the robot than with the human examiner. We argue that, far from being a trivial ‘cosmetic change’, using a social robot in place of traditional testing could be a potential way to open up the development of a new generation of tests for brief cognitive assessment.

Introduction

Social robots are considered ‘relational artifacts’ (Turkle, Taggart, Kidd, & Dasté, 2006) that differ from other information technologies since they can physically interact with real-world objects and people through verbal, non-verbal or affective modalities (Breazeal, Dautenhahn, & Kanda, 2016). With the advent of robots capable of understanding and communicating in a human-like way (Nourbakhsh, 2013), robots are expected to pervasively enter our everyday environments and become social agents with which people will socially engage for a variety of purposes, ‘from love to war’ (Royakkers & van Est, 2015).

Mental health care, in particular, is considered the field in which robotics technology will bring the most dramatic changes in the near future (Rabbitt, Kazdin, & Scassellati, 2015). Social robots have been successfully introduced in mental healthcare scenarios for a variety of purposes (Riek, 2015), most notably as companions for older adults, to improve psychosocial outcomes (Bemelmans, Gelderblom, Jonker, & De Witte, 2012), to prevent cognitive decline (Shibata & Wada, 2011), and to improve the effectiveness of interventions targeting children with autism spectrum disorders (e.g. Desideri et al., 2018; for a recent review, see; Pennisi et al., 2016).

Surprisingly, a neglected area of research is the employment of social robots for cognitive assessment, which represents an essential aspect of any mental healthcare service, and in which digital innovation is rapidly replacing more traditional practices (e.g. Desideri et al., 2016, Pedroli et al., 2018, Thompson et al., 2007, Williams and McCord, 2006). Each year, millions of tests are carried out involving children and adults in many societies (Sternberg & Grigorenko, 2002), and the results from the assessment of cognitive functioning often represent a gateway for individuals’ opportunities with regard to funding eligibility, as well as access to health or educational services and work.

Brief cognitive testing (BCT) is increasing at a rapid pace (Roebuck-Spencer et al., 2017). Unlike comprehensive neuropsychological test batteries, BCT refers to the rapid assessment of a limited set of cognitive functions for the early identification of individuals in need of more comprehensive evaluation (Roebuck-Spencer et al., 2017). As such, BCT is ever more frequently used to monitor the mental health status and needs of diverse non-clinical populations in primary care as well as in other settings, such as workplaces and schools (Ouvrier, Hendy, Bornholt, & Black, 1999).

The importance of BCT is also evident in educational settings in order to identify both frail and gifted students (Card and Giuliano, 2016, Ouvrier et al., 1999). With social robots increasingly entering teaching practices (Kanero et al., 2018), it is plausible to imagine them also used to test students’ cognitive abilities by combining the general benefits of technology – such as rapid and accurate data gathering – with engaging and adaptable testing scenarios (Westlund et al., 2017).

In primary care, BCT may be used to detect early signs of dementia before functional impairment becomes evident. Unfortunately, health professionals’ limited time and lack of resources often prevent BCT from being administered in time, with the frequent consequence of delayed dementia diagnosis (Bradford, Kunik, Schulz, Williams, & Singh, 2009). As social robots are increasingly used with older adults without cognitive impairments, to engage them in cognitive games as well as physical exercise (Bemelmans et al., 2012, Fasola and Mataric, 2013), drawing on devices that are also capable of constantly keeping track of the cognitive status of their human interaction partners might offer a valuable strategy in expanding BCT coverage for ageing populations.

BCT is also commonly used with older workers at risk of cognitive decline (Mair & Starr, 2011). In this context, robot-performed BCT may prove useful in hazardous workplace environments where workers are engaged in complex human-robot teaming tasks (Jung et al., 2013, Thomas et al., 2016). By equipping the robot with the ability to infer its interaction partner's mental capacities through simple, structured interactions, the robot could regulate interaction complexity (Matarić & Scassellati, 2016), continuously estimate the vigilance of human employees and the assistance they require, or refer them for a more comprehensive assessment if necessary.

Given this potential, we felt it our duty as researchers to investigate how people would react during interaction with a robot performing a brief cognitive assessment, and compare this to the same situation involving a human interlocutor.

Seminal research conducted by Nass and colleagues (Nass and Moon, 2000, Reeves and Nass, 1996) has shown that, in a variety of circumstances, people “respond socially and naturally to media” (Reeves & Nass, 1996, p. 7), interacting with these devices as if they were human even though they know this is not the case. This effect has been formally conceptualized within the Computers are Social Actors (CASA) paradigm (Reeves & Nass, 1996), which hypothesizes that people will apply to human-computer interaction the same social scripts that guide human-human interaction (Reeves & Nass, 1996; see also Broadbent, 2017, Kim and Sundar, 2012).

Within this perspective, researchers in the field of human-robot interaction (HRI) have recently argued that the CASA paradigm may also be extended to human-robot interactions to understand how people respond to social robots (Edwards et al., 2016, Rosenthal-von der Pütten et al., 2013). To date, several social robots with different appearances and capabilities have been developed to engage people in an interpersonal manner (for an overview, see Breazeal et al., 2016). For the purposes of this study, we focused on humanoid robots, that is, robots whose physical appearance and kinematics, as well as sensing and behaviour, resemble those of humans (Fitzpatrick et al., 2016).

Following the CASA paradigm, the present investigation is theoretically grounded on the hypothesis that there may be many commonalities between human-human interaction and human-humanoid robot interaction (Bartneck & Hu, 2008). Due to their resemblance to humans, humanoid robots are thought to facilitate social interaction and communication, as they possess all the necessary features to convey social signals (Breazeal, 2003, Dautenhahn, 2007, Fink, 2012). Here, we assume that humanoid robots are perceived as social entities that may evoke emotional reactions similar to those evoked during human-human interactions. Emotional processes serve a social function, as they allow us to form and maintain social relationships or create distance from others (Keltner & Kring, 1998). Social stimuli, in particular, determine the way we experience and express emotions during social interactions (Lindblom & Ziemke, 2007). For instance, as noted by Lorenz, Weiss, and Hirche (2016) for human-human communication, a person might become irritated if the interaction flow is not smooth due to a physical impairment (e.g. dysarthria) in the interaction partner. Under this hypothesis, the same may also hold for human-robot interaction, in which even the smallest divergence from expected social dynamics may affect the emotional tone of the interaction (Wykowska, Chellali, Al-Amin, & Müller, 2012).

In addition, central to the scope of the present study is that mental processes are intrinsically tied to – and fundamentally inseparable from ‒ the body's states, actions, and sensory-motor and affective experiences (Barsalou, 1999, Barsalou, 2009). Within this perspective, social relations and emotional processes are closely linked, and their interactions contribute to generating mental processes (Pessoa, 2008). For instance, it has been widely documented that negative (e.g. irritation, sadness) and positive (e.g. happiness) emotional states differently modulate cognitive performance in a variety of tasks, including those involving working memory and executive control (e.g. Gray, 2001, Storbeck and Maswood, 2016).

The current study was thus designed to test whether humanoid robots, “in the way that they communicate, instruct and take turns interacting, are close enough to human that they encourage social responses” (Reeves & Nass, 1996, p. 22), so that participants’ emotional reactions and test performance in a brief cognitive assessment conducted by a humanoid robot would resemble those of a similar assessment conducted by an expert clinician.

Previous research addressing emotional processes in human-robot interaction has often endorsed the view that emotions produce experiential (e.g. feeling angry), behavioural (e.g. severe frown) and physiological (e.g. heart rate acceleration) outputs (Bethel et al., 2007, Libin and Libin, 2003, Rosenthal-von der Pütten et al., 2013, Tiberio et al., 2013). These outputs can be grouped along two bipolar but independent dimensions of emotional experience, namely valence (how positive or negative) and arousal (how exciting or calming) (see e.g. Barrett and Russell, 1999, Kensinger and Corkin, 2004, Pollick et al., 2001). While valence can be accessed through verbal behaviour, usually in the form of self-reports using Likert-type scales, arousal may be better captured by assessing embodied processes such as non-verbal behaviour and physiological reactions (Bethel et al., 2007, Lottridge et al., 2011).

According to Watson and Tellegen (Watson, Wiese, Vaidya, & Tellegen, 1999), valence includes a positive affect (i.e. the extent to which one is experiencing a positive mood, such as enthusiasm) and a negative affect (i.e. the extent to which one is experiencing a negative mood, such as guilt or anger). Although scant attention has been paid to the emotional reactions people display in regulating their interaction with a humanoid robot rather than a human agent (Rosenthal-von der Pütten et al., 2013, Vincent et al., 2015, de Graaf, 2016), there is evidence that people have more positive attitudes towards robots that are somewhat human-like, yet they find very human-like robots unnerving (Mori, 2012). Recently, a study by Brink, Gray, and Wellman (2017) showed that children and adolescents aged from 3 to 18 years perceive a very human-like robot as creepy or weird when compared to a more abstractly humanoid robot (the robot NAO, depicted in Fig. 1), which was perceived as having a neutral appearance. Thus, among the general population, anthropomorphic realism in robots is thought to generate a negative affective state marked by an overt sense of strangeness or even disgust in the perceiver (Gray and Wegner, 2012, MacDorman et al., 2009), which may in turn influence the observer's behaviour. Mathur and Reichling (2016), for instance, showed that humanoid robots with a clearly mechanical appearance are considered more trustworthy than somewhat human-like robots, as demonstrated by an experiment in which people tended to entrust more money to the former than the latter in a wagering game. In light of this evidence, in order to reduce the risk of provoking a participant's negative reaction that could potentially affect cognitive performance, in the current study we used the humanoid robot NAO (Gouaillier et al., 2009) to administer the BCT, comparing this involvement with that of an expert clinician.

Non-verbal behaviour is an essential aspect of face-to-face communication (Wachsmuth, Lenzen, & Knoblich, 2008), and conveys important affective and emotional information (Roter, Frankel, Hall, & Sluyter, 2006). Non-verbal behaviour is defined as including a variety of communicative behaviours that do not carry linguistic content (Knapp, Hall, & Horgan, 2013). Eye-gaze behaviour, in particular, plays a crucial role in regulating communicative processes (Argyle, 1972, Emery, 2000, Kleinke, 1986), and is the non-verbal behaviour that has been most widely investigated within the HRI field (for a review, see Admoni & Scassellati, 2017). Two interrelated types of gaze behaviour, in particular, seem to convey important information concerning the interlocutor's emotional states during patient-clinician encounters (Hall et al., 1995, Troisi, 1999): face-directed gaze and gaze avoidance. During face-to-face interactions, conversational partners usually tend to look at each other's face, signalling attention and interest towards the interaction partner (Argyle, 1972). In this context, interaction partners may engage in eye contact or mutual gazes, which are believed to modulate activity in structures of the social brain network and to have a substantial influence on interlocutors' ongoing cognitive processes (Senju & Johnson, 2009). When alternating between interactions with a robot and with another person, for instance, people spend more time looking at the robot's face than at the human's when naming an object (Yu, Schermerhorn, & Scheutz, 2012).

Gaze aversion, on the contrary, is defined as an intentional redirection of gaze away from the interlocutor's face. Averting the gaze during face-to-face interaction is a non-verbal behaviour that may be triggered by a variety of factors, including cultural (e.g. avoiding sustained eye contact; Edelmann et al., 1989), social (e.g. interrupting eye contact to end a conversation; Williams et al., 1998, Wirth et al., 2010), emotional (e.g. embarrassment; Keltner & Anderson, 2000) and attention-based (e.g. reflexive shift of attention towards the direction of the interlocutor's gaze; Hadjikhani, Hoge, Snyder, & de Gelder, 2008) ones. Negative affective states such as anxiety or embarrassment, in particular, may trigger gaze aversions during face-to-face interactions (Costa, Dinsbach, Manstead, & Ricci Bitti, 2001). This avoidance may be due to heightened self-focus during social interactions in order to cope with expectations about the other's negative evaluation (Mellings & Alden, 2000).

Available HRI research suggests that, when engaged in a human-robot interaction, people tend to produce less gaze avoidance than in a human-human interaction. Bartneck, Bleeker, Bun, Fens, and Riet (2010) found that participants reported feeling more embarrassed and that they engaged in more gaze aversions and hand movements towards the face when examined by a life-like robot than by an ordinary medical device (Bartneck et al., 2010). Similarly, in a study conducted by Wood et al. (2013), children aged between 7 and 9 years were interviewed about a recent social event both by a human interviewer and by a humanoid robot. Analysis of gaze patterns showed that children looked towards the face of the humanoid robot significantly more, thus producing fewer gaze aversions, a result that was interpreted as evidence of the children's greater interest in the novel object (Wood et al., 2013).
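Findings like these rest on quantifying gaze behaviour from coded video. As a hedged illustration only (this is not the coding scheme used in the studies above; the function name, the boolean frame-annotation format, and the 25 fps rate are all assumptions for the sketch), face-directed gaze time and gaze-aversion counts could be derived from a frame-by-frame annotation as follows:

```python
def gaze_summary(samples, fps=25):
    """Summarize face-directed gaze from a frame-by-frame annotation.

    `samples` is a hypothetical list of booleans, one per video frame,
    True when the participant's gaze is on the interlocutor's face.
    Returns (seconds spent looking at the face, number of gaze aversions),
    counting one aversion each time gaze leaves the face.
    """
    face_time = sum(samples) / fps
    # An aversion onset is a True -> False transition between consecutive frames.
    aversions = sum(1 for prev, cur in zip(samples, samples[1:])
                    if prev and not cur)
    return face_time, aversions
```

For example, an annotation with 10 on-face frames, 5 off-face frames, and 10 on-face frames at 25 fps yields 0.8 s of face-directed gaze and a single aversion.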

In addition to gaze behaviour, the study by Bartneck et al. (2010) highlights that gestures and postures may also represent important embodied aspects of social interactions (Troisi, 1999), and should be considered as important cues in disambiguating emotional expressions coming from the face (Aviezer, Trope, & Todorov, 2012).

In the HRI field, increasing interest is being devoted to the search for indicators of physiological arousal in people while interacting with a robot (Bethel et al., 2007, Rosenthal-von der Pütten et al., 2013). To this aim, the third aspect of emotional expression addressed in the present study regards the physiological signature of social engagement, namely heart rate variability (HRV) (Porges, 2003, Porges, 2007). HRV is a measure of the variation in the time interval between consecutive heartbeats; it reflects the interplay between sympathetic and parasympathetic influences on heart rate and carries information on the emotional response to internal (bodily) and external (environmental) stimuli (e.g. Appelhans and Luecken, 2006, Task Force of the European Society, 1996). According to one of the most influential accounts in the field (i.e. Polyvagal Theory; Porges, 2001, Porges, 2007), HRV is particularly suited to detecting the degree to which an individual perceives the environment as safe or threatening. Extensive empirical research supports the link between higher HRV levels and a greater ability to recognize, express and regulate emotions (see Balzarotti, Biassoni, Colombo, & Ciceri, 2017, and Holzman & Bridgett, 2017, for recent reviews and meta-analyses), and therefore to have more adaptive social interactions (see Colonnello, Petrocchi, Farinelli, & Ottaviani, 2017, and Shahrestani, Stewart, Quintana, Hickie, & Guastella, 2015, for recent reviews and meta-analyses).
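As a concrete illustration of the time-domain indices this literature relies on, the sketch below computes SDNN and RMSSD from a series of RR intervals. It is a minimal example under stated assumptions (artefact-free RR intervals in milliseconds; `hrv_metrics` is a hypothetical helper, not the analysis pipeline of any of the cited studies, which typically use dedicated software such as that described by Niskanen et al., 2004):

```python
import math

def hrv_metrics(rr_intervals_ms):
    """Compute two common time-domain HRV indices from RR intervals (ms).

    SDNN: sample standard deviation of all RR intervals.
    RMSSD: root mean square of successive differences, commonly taken
    as an index of parasympathetic (vagal) influence on heart rate.
    """
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / (n - 1))
    # Successive differences between consecutive heartbeat intervals.
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return sdnn, rmssd
```

Higher values of either index generally indicate greater beat-to-beat variability; real recordings would first require artefact correction and ectopic-beat removal.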

Few HRI studies addressing human physiological reactions during human-robot interaction have systematically assessed whether such reactions resemble those elicited in human-human interactions. In general, it can be argued that, for a possible difference in people's reactions to emerge during HRI, emotionally valenced experimental conditions would be necessary. In this vein, for instance, Rosenthal-von der Pütten et al. (2014) showed participants videos of either an animal-like robot or a human being tortured by a human. Analyses of neurophysiological and subjective reactions revealed that the robot's mistreatment elicited less emotional distress and negative empathetic concern than the human victim's. On the contrary, when the interaction does not include emotionally valenced situations, physiological reactions seem to be equivalent when interacting with a human or a robot. Consistent with this view, Tanaka et al. (2012) recruited a sample of elderly women to assess the effects of living with an interactive humanoid robot on cognitive functioning and HRV, compared with the effects of living with a non-interactive humanoid robot. After 4 and 8 weeks, global cognitive functioning, judgment capacity and verbal memory had improved in the group of women living with the interactive robot, while HRV did not significantly differ between the two groups.

Here we present a real-world robot study in which each participant was asked to complete two cognitive tasks twice, once administered by an expert clinician and once by a humanoid robot, with the aim of assessing whether interacting with a humanoid robot or a human examiner would produce different emotional reactions and different test performances. For the purposes of the current study, we chose to involve a non-clinical sample of young adults, since the cognitive tasks administered are conceived as a BCT tool applicable to the general population. The tasks administered were developed to cover the two functional cognitive abilities most commonly tapped by BCT tests for large-scale community screening, namely calculation and recall (Cullen, O'Neill, Evans, Coen, & Lawlor, 2007).

The research question addressed here was: compared to assessment by a human examiner, does being assessed by a humanoid robot differently affect test accuracy, emotional valence, physiological reactions, or non-verbal behaviour patterns?

According to the CASA paradigm, we predicted that using a humanoid robot with a neutral appearance (NAO) would not influence test accuracy (H1; Brink et al., 2017, Mathur and Reichling, 2016), emotional valence (H2; Brink et al., 2017) or physiological reactions (H3; Tanaka et al., 2012), compared to the same assessment performed by a human interlocutor. We also hypothesized that participants would generate gaze aversions and direct looking behaviours in both conditions (robot vs. human), but that they would spend more time looking at the interlocutor's face and generate fewer gaze aversions when interacting with a robot rather than a human examiner (H4; Bartneck et al., 2010, Wood et al., 2013, Yu et al., 2012).
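Hypotheses H1 to H4 imply within-subject comparisons, and effect sizes such as the d = 1.3 reported in the abstract for the gaze measures correspond to a Cohen's d for paired samples. As a hedged sketch (the exact d variant used by the authors is not specified in this excerpt; shown here is the common mean-difference over standard-deviation-of-differences form, sometimes written d_z):

```python
import math

def cohens_d_paired(cond_a, cond_b):
    """Cohen's d for a within-subjects (paired) comparison:
    mean of the pairwise differences divided by their sample SD.
    """
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff / sd_diff
```

By Cohen's (1988) conventions, values around 0.2, 0.5 and 0.8 are read as small, medium and large effects, so a d of 1.3 would count as a large difference between conditions.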

Section snippets

Study design and participants

To test our hypotheses, we used a within-subjects experimental design. Participants were undergraduate volunteers enrolled in psychology courses at a local university. The sample consisted of 29 participants (17 females; M = 24.6 years, SD = 2.42 years). The sample size was established on the basis of similar previous research addressing emotional reactions in a comprehensive manner and in which sample size ranged between 14 and 41 (Rosenthal-von der Pütten et al., 2013, Rosenthal-von der

Sample characteristics

As illustrated in Table 2, participants’ anxiety levels were within a non-clinical range. No gender differences were found in state anxiety, trait anxiety, motivation, mood, or the negative and positive affect reported at baseline. State anxiety was significantly associated with positive (r = −0.5, p = 0.006) and negative (r = 0.52, p = 0.004) affect reported at baseline. Furthermore, both state and trait anxiety were associated with a more negative mood (r = −0.55, p

Discussion

The aim of the present study was to comprehensively explore participants’ emotional reactions when interacting with a humanoid robot in comparison with a human interlocutor, and to evaluate whether the test administered by the robot would yield different results to that administered by an expert clinician. To this end, we used a multi-dimensional approach to investigate emotional processes, including subjective, non-verbal and physiological aspects of affective expression.

Acknowledgements

The authors would like to thank Alessandro Saracino and Fondazione Golinelli (fondazionegolinelli.it) for their invaluable support in the realization of this study.

References (131)

  • S.G. Hart et al. Development of NASA-TLX (task load index): Results of empirical and theoretical research

  • M. Hoenen et al. Non-anthropomorphic robots as social entities on a neurophysiological level. Computers in Human Behavior (2016)

  • J.B. Holzman et al. Heart rate variability indices as bio-markers of top-down self-regulatory mechanisms: A meta-analytic review. Neuroscience & Biobehavioral Reviews (2017)

  • Y. Kim et al. Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior (2012)

  • S.R. Langton et al. Attention capture by faces. Cognition (2008)

  • K.F. MacDorman et al. Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior (2009)

  • M.B. Mathur et al. Navigating a social world with robot partners: A quantitative cartography of the uncanny valley. Cognition (2016)

  • T.M. Mellings et al. Cognitive processes in social anxiety: The effects of self-focus, rumination and anticipatory processing. Behaviour Research and Therapy (2000)

  • J.P. Niskanen et al. Software for advanced HRV analysis. Computer Methods and Programs in Biomedicine (2004)

  • C. Ottaviani et al. Cognitive rigidity is mirrored by autonomic inflexibility in daily life perseverative cognition. Biological Psychology (2015)

  • K. Pierce et al. Eye tracking reveals abnormal visual preference for geometric images as an early biomarker of an autism spectrum disorder subtype associated with increased symptom severity. Biological Psychiatry (2016)

  • F.E. Pollick et al. Perceiving affect from arm movement. Cognition (2001)

  • S.W. Porges. The polyvagal theory: Phylogenetic substrates of a social nervous system. International Journal of Psychophysiology (2001)

  • S.W. Porges. The polyvagal perspective. Biological Psychology (2007)

  • S.M. Rabbitt et al. Integrating socially assistive robotics into mental healthcare interventions: Applications and recommendations for expanded use. Clinical Psychology Review (2015)

  • H. Admoni et al. Social eye gaze in human-robot interaction: A review. Journal of Human-Robot Interaction (2017)

  • B.M. Appelhans et al. Heart rate variability as an index of regulated emotional responding. Review of General Psychology (2006)

  • M. Argyle. Non-verbal communication in human social interaction

  • H. Aviezer et al. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science (2012)

  • L.F. Barrett et al. The structure of current affect: Controversies and emerging consensus. Current Directions in Psychological Science (1999)

  • L.W. Barsalou. Perceptions of perceptual symbols. Behavioral and Brain Sciences (1999)

  • L.W. Barsalou. Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society of London B Biological Sciences (2009)

  • C. Bartneck et al. The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn. Journal of Behavioral Robotics (2010)

  • C. Bartneck et al. Exploring the abuse of robots. Interaction Studies (2008)

  • G.W. Beattie. A further investigation of the cognitive interference hypothesis of gaze patterns during conversation. British Journal of Social Psychology (1981)

  • R. Bemelmans et al. Socially assistive robots in elderly care: A systematic review into effects and effectiveness. Journal of the American Medical Directors Association (2012)

  • C.L. Bethel et al. Survey of psychophysiology measurements applied to human-robot interaction

  • T. Blekher et al. Saccades in presymptomatic and early stages of Huntington disease. Neurology (2006)

  • T.D. Borkovec. The nature, function, and origins of worry

  • A. Bradford et al. Missed and delayed diagnosis of dementia in primary care: Prevalence and contributing factors. Alzheimer Disease and Associated Disorders (2009)

  • C. Breazeal et al. Social robotics

  • K.A. Brink et al. Creepiness creeps

  • E. Broadbent. Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology (2017)

  • D. Card et al. Universal screening increases the representation of low-income and minority students in gifted education. Proceedings of the National Academy of Sciences (2016)

  • J. Cohen. Statistical power analysis for the behavioral sciences (1988)

  • V. Colonnello et al. Positive social interactions in a lifespan perspective with a focus on opioidergic and oxytocinergic systems: Implications for neuroprotection. Current Neuropharmacology (2017)

  • M. Costa et al. Social presence, embarrassment, and nonverbal behavior. Journal of Nonverbal Behavior (2001)

  • B. Cullen et al. A review of screening tests for cognitive impairment. Journal of Neurology, Neurosurgery & Psychiatry (2007)

  • K. Dautenhahn. Socially intelligent robots: Dimensions of human–robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences (2007)

  • N. Derakshan et al. Anxiety, processing efficiency, and cognitive performance: New developments from attentional control theory. European Psychologist (2009)