Publicly available. Published by Oldenbourg Wissenschaftsverlag, April 22, 2021

I Don’t Know, Is AI Also Used in Airbags?

An Empirical Study of Folk Concepts and People’s Expectations of Current and Future Artificial Intelligence

  • Fatemeh Alizadeh

    Fatemeh Alizadeh is a research assistant and doctoral candidate at the Department of Information Systems, in particular IT Security and Consumer Informatics, at the University of Siegen. After her Bachelor’s degree in Computer Engineering, she continued her studies with a first Master’s degree in Artificial Intelligence and a second in Human-Computer Interaction at the University of Siegen, during which she won the Usability Challenge Award in Germany. Fatemeh’s research interests include developing new and creative communication techniques between users and opaque AI algorithmic systems to provide users with more satisfying and engaging interactions.

  • Gunnar Stevens

    Prof. Dr. Gunnar Stevens is Professor of Information Systems, in particular IT Security, at the University of Siegen. He is also co-director of the Institute for Digital Consumption (Institut für Verbraucherinformatik) at Bonn-Rhein-Sieg University of Applied Sciences. In 2010 he was awarded the IBM Eclipse Innovation Prize and the IHK Siegen-Wittgenstein Doctoral Prize for his research. He leads several research projects in the field of digital consumption, including mobility, food and housing, user-centered security and privacy, and consumer information systems for the IoT. His current research focus is the impact of AI systems on consumer behavior. He has published more than 100 papers in venues such as i-com, DuD, ToCHI, CHI, CSCW, WI, and JIT.

  • Margarita Esau

    Margarita Esau is a research assistant and doctoral candidate at the Department of Information Systems, in particular IT Security and Consumer Informatics, at the University of Siegen. After her B. Eng. in Media Technology, she worked as a freelancer in media production and design and completed a Master of Science in Human-Computer Interaction at the University of Siegen. Her research interests lie in the design of engaging experiences between humans and conversational agents, with a particular focus on food practices.

From the journal i-com

Abstract

In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of “where is AI?” from users who were interacting with artificial intelligence (AI) but did not realize it. After three decades of research, we still face the same issue: people’s understanding of AI remains unclear. The lack of mutual understanding and expectations between AI users and designers, and the ineffective interactions with AI that result, raise the question of how AI is generally perceived today. To address this gap, we conducted 50 semi-structured interviews on perceptions and expectations of AI. Our results reveal that, for most people, AI is a dazzling concept that ranges from a simple automated device to a fully controlling agent and a self-learning superpower. We explain how these folk concepts shape users’ expectations when interacting with AI and when envisioning its current and future state.

1 Introduction

In 1991, several artificial intelligence (AI)–enabled prototypes of higher educational programs were successfully used at the Center for the Learning Sciences of Carnegie Mellon University. Researchers were thus frequently confronted with the puzzling question of “where is AI?” from confused users who did not recognize AI in the interaction [44]. Schank considered the mismatch between the prototypes and users’ then-current definitions of AI to be the main reason for this phenomenon and thus analyzed the different viewpoints on AI, distinguishing among the following four groups [44]:

  1. AI means magic bullets: A machine is intelligent if it builds unanticipated connections.

  2. AI means inference engines: A machine is intelligent if it turns experts’ knowledge into rules.

  3. AI means exceeding the existing: AI means getting a machine to do something you did not think a machine could do (the “gee whiz” view). A machine is intelligent if it completes a task that no machine has ever accomplished before.

  4. AI means machines that can learn: A machine is intelligent if it can learn by itself.

In the past three decades, AI has attracted increasing attention from both researchers and industry leaders [9], [19], [20], [53]. Nevertheless, regarding the public perception of AI, several studies have shown that even today, users do not realize that they are interacting with an AI-enabled technology [3], [8], [13], [14], [31], [54]. What makes this problem worse is that defining AI is confusing, not only for users with no computational background, but even for researchers and AI practitioners [31], [43]. This is due, among other factors, to the evolution of the term over time and to the fact that it has never represented a single specific technology in a single specific time period [13], [20]. As a result, the current limited perception, misconceptions, and unrealistic expectations of AI (e. g., the superhuman fallacy) [7], [31] not only lead to frustration due to users’ unmet desires, but also prevent effective interaction and collaboration with AI [15], [31].

Individuals create informal theories (“folk theories”) that are not necessarily accurate or realistic in order to perceive how a system works and to interact with it [12]. With the widespread use of recommender systems, collaborative filtering, and personalized services, there is an undeniable need for users to actively interact with these intelligent agents [12], [26]. However, misconceptions and limited folk theories lead to active but not effective interactions and cause frustration among users [31]. A large body of research has focused on explainable AI (XAI) [2] to support users’ understanding of AI-based systems and to combat incomplete mental models [22], [30], [41]. However, previous scholars have neglected people’s pre-existing folk theories [30], and some have admitted that the results they reported on users’ perceptions were fragmented [52].

To help close this gap, we take a step back and investigate the current public perception of AI. To this end, we consider “users’ perceived AI” as an evolving phenomenon and investigate its aspects and dimensions in comparison with the perception of AI held by its designers. The key contribution of this work is updating Schank’s 30-year-old study of people’s understanding of general AI. For this purpose, we report and discuss a thematic analysis of 50 in-depth interviews with users and show how people’s perceptions of the current and future state of AI are shaped by various interconnected folk concepts of AI’s definition, capabilities, and meaningful use cases. Understanding users’ pre-existing perceptions and folk theories will not only support the design of explainable AI-embedded systems but will also enable us to evaluate these systems more effectively.

2 Research Background

This study is grounded in three research areas: previous research on public acceptance and awareness of AI-enabled systems; the design of more explainable AI systems; and people’s folk concepts, theories, and mental models of how a specific AI-enabled system (e. g., intelligent agents or the Internet of Things) works. We therefore first outline previous research on the terms related to public understanding of AI, namely acceptance, trust, and awareness. We then clarify the necessity of more explainable systems to support users’ interaction with AI, and finally, we review the dispersed prior work on public informal theories of AI systems across four domains of AI-enabled systems and services.

2.1 Public Acceptance and Awareness of AI

If we look at several large-scale reports on public acceptance of and trust in AI-enabled technologies, we can recognize a trend of increasingly positive reactions towards AI. To begin with, in a survey of over 2000 respondents conducted in 2015 by the British Science Association, one-third of the respondents considered the rise of AI to be a danger to mankind [35]. In 2017, although some researchers indicated that users still felt uncomfortable with AI making decisions on their behalf, the majority of the 27,901 interviewed EU citizens had a generally positive view of robots and AI [22], [41], [48]. More recent surveys show even greater support for future AI developments among users [8].

In addition to trust and acceptance, AI awareness has also been discussed in the context of the perception of AI. A HubSpot study of over 1,400 consumers worldwide revealed that 63 % of respondents claimed not to use AI tools while in fact using them without being aware of it [3]. Pegasystems confirmed these findings in a study of 5000 consumers, of whom only 34 % agreed that they had previously used AI technology, whereas 84 % actually had [54]. Furthermore, researchers at the Center for the Governance of AI at the University of Oxford found that the majority of users believed that virtual assistants, smart speakers, driverless cars, social robots, and autonomous drones use AI, but that Facebook photo tagging, Google Search, Netflix or Amazon recommendations, and Google Translate do not [52]. The results from a Northstar survey of 3804 consumers in 2020 also indicate a significant difference in recognition between “invisible AI” (i. e., AI algorithms behind the scenes) and “visible AI” (i. e., tangible AI devices) [8]. Here, 90 % of the respondents knew that voice assistants are AI-enabled, but almost one-third did not consider social media to be an AI-based technology. Davies argued that public awareness of AI depends on the visibility of its application [8]. Nonetheless, this study shows that users do not define AI as humanoid and know the difference between sci-fi and reality [8].

By distinguishing between public perception of AI and public acceptance, trust and awareness, we aim to contribute to closing the gap in the literature and to study users’ definitions of AI and informal theories on how AI-enabled technologies work.

2.2 Designing Explainable Technology

To move towards a more transparent design of technology and address trust concerns [2], “explainable AI” was introduced to describe AI systems whose activities can be effortlessly comprehended and examined by humans [21]. Van Lent et al. [49] first used the term in 2004 to explain the behavior of AI-enabled elements in simulation applications. Over time, the scholarly community and practitioners have paid renewed attention to explainable AI [33].

By refining users’ mental models of AI-enabled systems and resolving their misconceptions, XAI promises more effective performance by users [41]. However, previous researchers have criticized XAI for resulting in less efficient systems and less flexible and capable output [33], [41]. To address this issue, XAI researchers have argued that explanations are not always necessary and have instead identified specific application domains in which they can bring significant benefits (e. g., healthcare and the military) [12], [33], [41]. In this regard, Molnar [41] characterizes human-friendly explanations as selective and focused on the abnormal. In other words, he suggests that people do not expect complete explanations that cover a full list of causes, but rather expect explanations of why a system behaved in a way they did not expect [41].

We must bear in mind that users were already interacting with AI-enabled systems long before the relatively new trend of XAI was applied to design [33] and that, as the operation of these systems was opaque, users have often developed theories of how they work (folk theories) to plan their interaction with them [10], [13], [14], [31]. Folk theories function as a frame in which user expectations are formed [31]. Therefore, understanding them is not only helpful in designing effective explanations but is also necessary in order to understand users’ assumptions and expectations of the systems and to discover more valid explanation use cases.

2.3 Making Sense of Technology

To make sense of technology, users often generate folk theories based on direct experiences and social interactions [14], [31]. These informal theories explain how systems operate and support users in responding to them [31]. Previous studies have thus argued that folk theories act as a window to capture users’ perceptions and assumptions [31]. However, the existing studies on mental models and folk theories in the field of AI are highly distributed [13]. Hence, by dividing earlier work into different AI domains, we provide an overview of prior research on folk theories and their role in understanding users’ perceptions, misconceptions, and expectations.

2.3.1 Content Curation Algorithms and Recommender Systems

Although curating, filtering, and clustering information are processes that shape users’ perception of the world, previous studies have suggested that more than half of users are not aware of content curation [26]. Eslami et al. conducted several studies on users’ perceptions and folk theories regarding content curation algorithms [13], [14], [26]. In a study of 40 Facebook users aimed at understanding their perception of the Facebook News Feed curation algorithm, the researchers found that 65 % of users were completely unaware of it. In a follow-up study of Facebook users’ perceptions, they discovered very specific folk theories, ranging from curation based on users’ engagement with other accounts or content to the algorithm balancing their friends or content [14]. DeVito et al. pursued the same aim and analyzed 102,827 tweets from a hashtag related to rumors about algorithmic curation of the Twitter timeline (#RIPTwitter) [10]. Their research revealed abstract and functional folk algorithmic theories, which define algorithms as concepts or processes, respectively [10].

Personalized agents are also highly dependent on the participation of active users for their in-the-field learning [25], [26]. Kuhl et al. investigated the interplay between users’ mental models and users’ behavior [25]. Among other findings, they found that there was no consistent understanding of learning algorithms and that users’ mental models varied based on their backgrounds and experiences [25]. In a study of a music recommender system, Kulesza et al. [26] also showed that providing users with structural knowledge of the reasoning of their recommender system improves the soundness of their mental models and increases their active participation in the interaction in order to receive the desired results [26].

2.3.2 Internet of Things (IoT)

Regarding sensor-enabled systems such as the Internet of Things (IoT), users develop folk theories about how the collected data can be used, rather than about how it is collected in the first place [29], [40]. Rader and Slaker [40] showed that users explain their activity based on their perception of their performed activities and the processed data provided by the interface. In their study on physical activity tracking, users entered some data (e. g., age and weight) themselves so that distance and calories could be calculated; however, they did not have a full understanding of their raw data. Therefore, they could not make informed decisions about the collection of their personal data [29].

Misconceptions may lead to unawareness of privacy risks. This is particularly true when information is collected without consent, as in the case of Bluetooth beacon systems [51]. Beacons were part of an invisible IoT infrastructure in which the users did not need to enter personal data (e. g., location) themselves. Yao et al. [51] used drawing as a method to reveal folk theories of how beacon-based systems work. Here, folk theories focused more on reasoning about the visible data collection. For example, the most common misconception was that these sensors collect and store user information, which they actually do not. Furthermore, users assumed that they initially needed to actively consent in order to receive location-based information. Finally, most of the users thought that the location that collected the data, such as the store or mall, also owned the data. As a result, users who decided, consciously or unconsciously, to be part of the sensor-enabled system had little understanding of the actual mechanisms of data collection and processing.

2.3.3 Robots

Humans tend to anthropomorphize computational artifacts in order to rationalize actions and behavior that they cannot reasonably explain. Inaccurate mental models frequently deceive people, who as a consequence credit autonomous systems with more capability and knowledge than they actually have [37]. Those mental models are influenced by appearance and physical attributes [24], dialogue, personality traits, language, and origin [37], and lack a clear understanding of mechanical and conceptual functionality. These attributes affect the credibility of robots [39]. Powers and Kiesler [39] showed that robots’ facial features, such as the dimensions of the forehead and chin, impact the perception of intelligence. Hence, people developed high expectations that were not fulfilled, which harmed collaboration and relationship building, both of which require trust in autonomous systems. Anthropomorphism can support users in their approach and reactions to a robot and prevent initial rejection. However, it might also lead to misconceptions, which can have detrimental consequences for the outcome and even endanger human lives, for example in military settings or critical workspaces [37]. Therefore, humans need a clear and accurate understanding, beyond anthropomorphism, of how robots collect and process data and make decisions.

2.3.4 Conversational Agents

Besides the anthropomorphism of conversational agents (CA) [27], an extensive body of HCI research has explored voice interaction and the adoption of domestic CA. In particular, some studies have investigated the perception and understanding of the attributed intelligence [11], [18], [27], [29], [50]. Researchers used drawing [27], [29], [50], interactive tasks [11], [18], [27], and interviews [11], [18] to explore the reasoning and explanations of system behavior that children of different ages create. Xu and Warschauer [50] showed that children used behavioral references like listening and talking to justify cognitive properties, reciprocity as an indicator of psychological properties, and biological references like mechanical causality or fantasy reasoning to explain behavioral properties. They tended to allocate CA to a continuum between humans and artifacts rather than to a distinct category. Especially younger children, who were less aware of the underlying concepts, attempted to make sense of computational artifacts by personifying them. When children acknowledged the devices to be more intelligent than themselves [18], [50], they were more likely to trust and believe the information provided, which contributed to improved learning [18]. Regarding the attributed state of intelligence or consciousness, the study by Druga et al. [11] indicated that younger children asked personal questions such as “what is your favorite color?”, while older children asked the device to perform actions of which they knew humans were capable. Participants with prior experience with technology or an engineering background applied more thoughtful reasoning [29]. Nonetheless, voice and tone affected the perceived friendliness [11], and the available input modalities communicated the expected level of intelligence [11], [18]. To enable meaningful interaction and collaboration, design decisions for these influencing factors [29] should aim to meet users’ expectations or communicate the actual capabilities of CA transparently.

3 Research Setting

In this section, we explain our research method and analysis technique. We start by presenting our research questions and data acquisition approach, after which we introduce our research sample and the implementation of the research method in practice. Finally, we explain our data coding and analysis technique.

As stressed by Blumer, “people act towards things (such as physical objects, people as well as abstract ideas) on the basis of meanings they ascribe to them” [6]. Hence, if we want to understand how and why people interact with AI, we should also understand what meanings people ascribe to AI. We call these meanings folk concepts, as this term expresses the folk- or ethno-conceptualization of the subject matter. This concept is closely related to the idea of folk theories [13], [26], but we avoid the latter term so as not to give the impression that folk concepts are reflected, systematic, and consistent theories analogous in their logic to scientific theories. Folk concepts serve rather as sensitizing concepts, used as a resource for framing and sense-making when interpreting, interacting with, or talking about something. Our study had two primary research questions:

  1. (R1) What meanings do people ascribe to AI?

  2. (R2) What expectations do people have when they ascribe AI to something?

When it comes to folk theories and mental models, as well as people’s assumptions about how a complex system works, in-depth interviews have previously shown great potential [12], [14], [34], [51]. Hence, with the aim of understanding people’s perceptions of AI, we applied a semi-structured interview method to gather a rich dataset and establish a research framework towards user-centered AI. The data corpus was collected within an undergraduate course on explainable AI, in which we asked the students to conduct interviews using a collectively developed semi-structured guideline. Along with demographic information, the guideline included the following five questions:
  1. A friend of yours asks you what AI is: How would you explain AI to him/her?

Concepts can be described in various ways. A common method is to provide an intensional definition [36], in which a concept is approached by listing essential features (e. g., a car has an engine), characterizing it by metaphors (e. g., a car is an iron horse), or drawing analogies (e. g., a car is like a carriage). The aim of the question above was to elicit such an intensional definition from participants expressing their understanding of AI.
  2. Can you name three examples of artificial intelligence?

Another way to explain a concept is to provide an extensional definition [36], naming examples that represent the concept. The question above aimed to provoke such extensional definitions. In the case of a complex issue, it is sometimes easier for people to explain a concept with the aid of an example. In addition, as normally there is no label such as “this is AI” or “AI inside,” the answers to this question help to understand what people perceive as AI from their own perspective.
  3. Have you ever had conversations with others (family/friends/colleagues) about AI? What did you talk about?

As mentioned by Blumer, the meaning “is derived from, or arises out of, the social interaction that one has with others and the society” [6]. This view is also reflected in Forlizzi and Battarbee’s notion of co-experience [4], which “takes place as experiences are created together or shared with others.” As they stress, “People find certain experiences worth sharing and ‘lift them up’ to shared attention.” With this question, we aimed to shed light on the co-experience of AI, especially on what people find so exciting about the idea of AI that it becomes worth elevating and sharing with others.
  4. If we tell you that AI is embedded into these objects, what would you expect from them? a) a door; b) a bank account.

When we interact with something new, we apprehend it on the basis of our existing ideas and map the new onto them. Garfinkel [17] used this necessity in his breaching experiments to make ethno-methods and social norms, typically implicit and taken for granted, visible. In this spirit, we also wanted to elicit and provoke deeper, generative folk concepts of AI by asking participants what would happen if AI were embodied in two known things. Although access control (whether to abstract data or physical property) plays a crucial role in both examples, by choosing one visible and familiar object and one rather abstract concept, we tried to compare people’s expectations based on the visibility of AI and on previous experiences with it. To elaborate, people have already experienced automatic doors and can relate more easily to the concept of a smart home, whereas a smart bank account leaves more space for imagination.
  5. If we told you that AI will disappear tomorrow, what do you think the consequences would be?

One of the main reasons that people do not recognize AI in technologies is that AI technologies have already ingrained themselves in our everyday life. Therefore, there is a good chance that we are already so integrated into the world of AI that we take it for granted [42]. To grasp people’s understanding of the role of AI in everyday technologies, we confronted them with an imaginary scenario in which AI vanishes from our world. We tried to formulate and ask the question in a neutral manner, so that the participants felt free to take a position on the current and future impact of AI on their lives.

To ensure the quality of our outcome, we trained the researchers to conduct semi-structured, in-depth interviews and explained the idea behind the convenience sampling method [1]. They interviewed people in their social networks who had experience with “smart” technologies such as voice assistants, fitness trackers, social media, robo-advisors, and recommender systems (e. g., on YouTube or Netflix). Each researcher conducted three interviews, but we excluded 25 of these due to low quality (e. g., very noisy recordings, suggestive and closed questions, or a lack of participants’ motivation to elaborate on their ideas). Overall, the final data corpus consists of in-depth interviews with 50 people (28 M, 22 F). The majority of the participants were between 20 and 30 years old (min 19, max 57, average age 28), had a higher-education background (min. B. A.), and were familiar with the daily use of technology. The interviews were conducted remotely (via video or audio call) and took 20–30 minutes. With the consent of the participants, the interviews were audio-recorded and transcribed.

Our analysis was based on the German interviews, and we used MaxQDA as a supporting tool. Taking a thematic analysis approach [47], each interview was coded in vivo and analyzed by two different researchers. The codes were then discussed in interpretation sessions using a narrative analysis approach. We translated the statements into English only to quote them in this paper.

4 Findings

In this section, we present the results of our data coding and analysis from four different aspects, namely, conceptualizations of AI, its competencies, the expectations people attach to it, and its current and future impacts.

4.1 What Is AI: A Thing, a Characteristic, or a Set of Instructions?

Conversational openings are of special interest, as they provide the context framing for what follows [16], [45]. Metaphorically speaking, the opening prepares the stage on which, or the guard rails within which, the narrative can unfold. Therefore, when asking for respondents’ definitions of AI, we paid particular attention to the initial sequence. Most of the participants chose a relatively broad and vague framing (such as referring to AI as something, a thing, a system, or an entity), while some were more focused, describing AI as a device, a brain, or a computer. These ambiguities, vagueness, and difficulties in defining the term do not indicate a restricted or defective understanding on the part of the respondents but are, in our view, well founded. Artificial intelligence is something that presents itself to the user only through its embodiment and its performance; it exists not in the behavior itself, but rather in whatever has produced the behavior. Normally, this was described as a property or competency of the entity. However, some mentioned program code, detaching this property from the concrete thing, meaning that the computer itself does not have AI, but that AI manifests only with the correct program, which itself is not AI either. The difficulties people faced in locating AI were reflected in their framing of what AI was.

Our brief analysis shows that providing a clear, uniform answer to the question “What is AI?” was tricky for the respondents. Hence, we also asked them to define AI extensionally by giving three examples of it. As expected, participants mainly focused on the thing-ness of AI, naming tangible devices, while the more abstract concepts of AI as a specific characteristic or a specific set of instructions were less prominent. For instance, machine learning was rarely listed as an example of AI, whereas Amazon’s Alexa, robots, and self-driving cars, followed by visible but not tangible examples such as Twitter bots and recommendations on Amazon or Google, were often mentioned.

4.2 On the Competencies of AI: From Automation to Simulation and Superpower

Despite the vagueness of these openings, they provided the respondents with certain frames to which certain competencies could be assigned. Based on these, we categorized the understanding of AI’s capabilities into six groups, shown in Table 1.

Table 1

Folk concepts of the meaning of AI and the identified categories.

Characteristic | Folk concept | Examples
Automation | AI can support humans by fulfilling a pre-programmed task | “A machine that takes over our routine tasks” (P21)
Agency | AI can work independently and make decisions for humans | “AI can make decisions independently and does not need humans to make decisions for it” (P47)
Like human | AI can act and react like humans | “A computer that acts human” (P38)
Simulating humans | AI can simulate human competencies | “Simulation of humans’ intelligence through machines” (P16)
Superpower | AI can improve on human intelligence, knows everything, and makes predictions that surpass human knowledge | “AI is the creation of machines that surpass the intelligence of humans” (P37)
Self-learning | AI can learn and develop by itself | “You feed the computer with one million cat pictures among others. Thus, AI learns what a cat looks like” (P27)

AI is automated: Like P21 (see Table 1), many participants described AI as a technology that operates autonomously with the purpose of fulfilling the interests and ideas of its developers. This view underlines the superiority of humans by emphasizing the never-ending need for human support for AI’s existence and functioning. The view also emphasizes the artificial nature of AI, placing AI in the long history of technical development and understanding it as being similar to the other everyday devices with which we interact. Artificial intelligence does not have its own agency and cannot work independently. “AI is dependent on the implementation of the programs” (P25).

AI has agency: The quote from P47 (see Table 1) indicates an understanding of AI as autonomous in a peculiar way. This becomes evident when we split the phrase into two parts. In the first part, P47 describes AI as a system that makes decisions for humans. We observed several variants of this phrase, such as that AI supports humans in everyday life, that AI provides services for humans, etc. In the second part, however, P47 stresses that AI does not need humans and is independent of them, which points to the complex relationship between AI and humans. Artificial intelligence is not just independent, but independent of humans. Various participants also mentioned that independence is not a future stage of AI, but its current status.

AI is just like humans: Participants often described AI with reference to humans. To understand folk concepts about AI, an understanding of folk concepts of human beings is thus a prerequisite. This is notable because, for instance, common-sense descriptions of natural things (such as trees, mountains, planets, etc.) as well as technical things (such as houses, cars, toasters, etc.) typically do not involve such a reference. Here, human beings serve as a model for AI: AI is conceived as an image of human beings, and to conceive of it one needs knowledge of what human beings are. From this standpoint, a technology can be seen as AI if it can pass the Turing Test: “AI has the same behavior as humans and acts and reacts at the same level” (P46). Some participants characterized AI with regard to specific capabilities or properties of humans: “AI means systems that act like humans regarding learning and understanding others” (P44).

AI learns by itself: Half of the participants addressed the competency of AI to learn and improve by itself. It was remarkable that, although we did not directly ask interviewees how AI learns by itself, they explored this topic themselves. A closer look allows us to analyze this folk concept along three interconnected issues: how learning works, what the sources for learning are, and what the results of learning are.

Learning source and strategy: There are thousands of ways to learn; for example, you can read a book, think about a problem, or become wise through experience. As in the phrase of P27 (see Table 1), most of the interview partners had a computational understanding, in which AI learns from data. Furthermore, some participants, such as P27, pointed out that AI can process large amounts of data, while some did not mention data itself, but patterns as sources of learning: “Computers analyze the patterns of human faces in such great detail and in such large quantities” (P31). In addition to technical vocabulary, many descriptions contained implicit and explicit analogies to human learning, for example when terms such as learning from experience, learning from mistakes, or learning from the analysis of facts were used. This kind of anthropo- or zoomorphization can be found in phrases such as “AI uses previous experiences as a basis for shaping connections, learns through experiences and makes connections (example: needle → pricking → pain)” (P7). In this phrase, AI is thought of as a small child or as a rat in a Skinner experimental setup that learns by operant conditioning. In a similar vein, P10, P15, and P17 referred to the common strategy of learning from mistakes: “The program itself learns from errors and improves itself” (P10). Sometimes the interviewees simply referred to the ability of AI to learn without explaining how this learning takes place: “AI is the ability to learn, conclude” (P43).

Table 2

Folk concepts of AI-enabled objects and the identified categories.

Expectations | Folk concept | Smart door | Smart bank account
Automation | AI-enabled means working automatically | “Automatic opening and closing of the door or based on time” (P4) | “It should pay my bills automatically” (P5)
Agency | AI-enabled means having the agency to make decisions independently | “The door should be able to receive parcels” (P11) | “I would expect a stop on spending, so that the artificial intelligence says, now you have spent up to this amount, please don’t continue” (P29)
Personalized services | AI-enabled leads to the capability of personalized interaction | “The door should clearly be personalized for me” (P30) | “It can learn my behavior and adjust its services, for example by finding the ideal investment” (P19)
Smart security | AI-enabled means protecting users’ security in a smart way | “The door should keep me safe from the dangers outside” (P15) | “Security of my money is the most important thing for me. So, the trust in AI must be established first” (P34)
Intelligent user recognition | AI-enabled means recognizing people in a smart way | “The door recognizes people and only those who are known are allowed to open it” (P21) | “AI can be embedded, so that I can withdraw money with face recognition” (P3)

Learning outcomes: “They can recognize faces relatively well and reproduce them digitally” (P31). Unlike learning strategies and sources, learning outcomes were only mentioned vaguely and could not be easily categorized into groups. We can roughly divide the descriptions into those in which the learning goal and scope are quite precisely defined, those in which they are quite broadly described, and those in which they are undefined. The first category includes descriptions in which specific goals such as speech recognition and image recognition are used as paradigmatic examples to explain AI learning. The second group includes descriptions in which AI learning is described only in very general terms, such as “AI learns through experience and makes connections” (P7) or “AI is a group of intensities that can learn from data and produce the appropriate output from input and continue to improve” (P1). In such a folk concept, it seems that specific goals or purposes play only a small role in whether a machine can be considered self-learning. Instead, learning seems to represent a general competency of AI that is in principle not limited to a specific area.

AI simulates human intelligence: As the Turing test also suggests, the intent and ultimate goal of AI is to reach a human level in its performance. “AI attempts to transfer human intelligence to machines to promote their learning ability” (P23). Here, AI is perceived as an infant in its early development, moving towards reaching the human level someday: “AI is a digital brain in a computer that learns behavior like a toddler” (P18). Therefore, unlike the respondents with the “just like humans” point of view, P18 does not believe that AI can already act and react like humans or pass the Turing test, but rather sees it as approaching the achievement of these goals in the future.

AI is a superpower: Various participants, such as P37 (see Table 1), assumed that AI has a super-intelligence that surpasses that of humans. This superintelligence not only knows the answer to any possible inquiry but can also predict the future: “AI draws conclusions from past events about what will happen in the future and predicts beyond our knowledge” (P20). One reason that AI is understood as a superpower is its unlimited data processing and storage capacity, seen, for example, in the phrase of P4: “Artificial intelligence is when algorithms have unlimited knowledge” (P4). Superintelligence is also grounded in the super-learning competency of AI, which was mentioned by respondents who perceived AI not as predefined code, but as a system that can generate unexpected results: “AI means self-learning systems that generate outputs through the corresponding inputs, which are not expected” (P1). The idea of a superpower places high demands on AI, so that one can ask whether today’s systems meet these requirements. P46 posed this question and answered that these capabilities do not exist in today’s technologies: “AI? It does not exist yet. We call them smart but they are not really smart” (P46).

4.3 On the Expectations of People: When Things Become AI

The question about what happens when things become AI helps us shed more light on people’s current beliefs and uncover how people apply their often-implicit ideas of AI and what their latent expectations are. We grouped the answers into five categories (see Table 2). The analysis reveals that the folk expectations of AI are shaped by the folk concepts of what AI is (see Table 1). Yet how these concepts are enacted and interpreted depends on the concrete scenario and application area.

Automation: Half of the respondents mentioned that they would expect a smart door to automatically open and close (one representative is P4; see Table 2), and two believed that this feature alone is an indicator of AI: “I expect that the door opens and closes automatically” (P50). Six others added sensors to their automatic door and introduced the intelligent door in this way: “It should recognize the movement and closes and opens based on its recognition” (P20). Therefore, we can say that some users do not see any difference between intelligent and automatic devices. P23 solved this problem by introducing different levels of intelligence: “The most primitive is the motion sensor, which detects when you approach the door. One step further would be image/face recognition. On the next level, the door could be connected to a smart home system, so the system could detect an intruder in the house” (P23). Interestingly, in the example of the bank account, only three respondents mentioned the automatic payment of bills: “A bank account, that can buy things independently or transfer money” (P27). This might be due to the higher risk that our participants associate with monetary services: “A bank that helps me save and transfers my monthly repeated bills automatically” (P36).

Agency: Regarding intelligent doors, only two respondents (P47 and P11; see Table 2) expected more comprehensive agency beyond smart automatic door opening. P47 described how “The door should perceive its environment and react accordingly; for example, if needed, it should make less noise while opening and closing” (P47). Although agency did not play an important role in users’ common expectations of a door, it was the most common expectation they held of an intelligent bank account. It is worth mentioning that some users struggled with whether or not the AI embedded in their bank account should be trusted to have its own agency. Therefore, they had different expectations of the levels of control that the bank account would possess.

Improved comfort: AI was seen as smart enough to eliminate the need for manual authentication: “No bank card is required. For example, with a chip in your hand you can access your account” (P37).

Replace service worker for simple tasks: The participants said that AI could start to compete with bank employees and take over simple tasks: “The possibility to open new accounts without having to communicate with the bank employee” (P43).

Replace service worker for complex tasks: The same was true for complex tasks. Those most commonly mentioned were the bank account serving as personal investor and financial advisor. Here, the smart bank account is able to consult and suggest a financial approach to the user but is not allowed to make decisions on its own: “The smart bank account should evaluate my expenditure, suggest a consultancy and support me with my account management” (P9).

Making decisions automatically for the client: The fourth level covers the expectations in which the bank account makes decisions on its own. For instance, it prevents users from spending money irrationally by not permitting such transactions (P29; see Table 2). On this level, the AI is transparent with the users and lets them know why the decision was made. This decision might not be aligned with the users’ present interests but would fulfill users’ long-term goals.

Making decisions automatically even against the client: Users placed AI in a position in which it has full control, making decisions even against their interest. For instance, P13 envisions a scenario in which AI has full agency for better or worse: “For example if the account often has a negative balance, this could affect the creditworthiness of the users and they cannot receive loans” (P13).

Personalized services: In the case of the bank account, about one-third of the respondents (16 of 50) expected personalized financial services. For instance, the bank account was expected to play the role of a financial coach by analyzing transactions and payment patterns and providing users with plans for saving or investment: “A bank account can over time make recommendations for potential savings or investment opportunities” (P17). Others also mentioned improved feedback about spending patterns: “It analyses transactions on a monthly/yearly basis and evaluates how much money was spent on what and whether it differs at certain intervals” (P23). In the case of the intelligent door, this rate was much lower. Only four participants mentioned expecting personalized services from their intelligent door. P1 and P7 explained these personalized services as recognition of behavioral patterns or schedules and customization of the door accordingly: “The door learns habits and can recognize through behavior patterns whether to lock or unlock” (P1). In addition, P9 also expected an intelligent door to begin communicating with people in a manner similar to that of a friendly porter: “I would expect a personal greeting” (P9).

Smart security: Security was among the most common expectations of our participants. Although it featured more prominently in the example of the smart bank account, four of the respondents also expected it from the intelligent door: “Security is the top priority here, the door should not only be protected against robbery. It should also be able to detect if an attempt is made to hack it, because anything that has to do with technology can be hacked and is very vulnerable” (P33). P15 (see Table 2) did not address the danger specifically, but only in general terms. Here again, the expectations of a bank account were higher. Two participants even expressed their reluctance to answer the question because of distrust in AI: “If AI is embedded in a bank account, then I will have a problem with trusting it” (P16) and “In a bank account, I want AI to be embedded as little as possible” (P25). P34 (see Table 2) addressed this problem by setting a condition for a smart bank account: he believed that, before anything else, a smart bank account should establish trust. Some others focused on the new opportunities that a smart bank account could provide to detect phishing attacks and protect against them: “A smart bank account can keep my bank details safe and can detect whether a website is fake or not when I want to make a transaction” (P46).

Intelligent user recognition: The importance of security also covers access control, authentication, and authorization. In the example of the door, more than half of the users mentioned face or fingerprint recognition for access as an expectation. This group includes P21, whose idea can be seen in Table 2. For a smart bank account, however, only five users mentioned expecting this capability. There might be two reasons for this. First, for a door, unlike a bank account, face or voice recognition is not yet a common feature, as many banking applications already provide it; one can therefore conclude that users expect a smart device to be capable of more uncommon actions. However, as security played the most important role in users’ concepts of a smart bank account, one can also think of another possible reason, namely, that users are still skeptical about this feature and would expect a smart bank account to be too intelligent to risk its users’ security in exchange for offering them more usability.

As the results show, a higher priority was placed on some attributes, such as automation and face/voice recognition, for the smart door, while others, such as agency and security, were of greater importance for the smart bank account. One reason might be a participant’s previous experiences with the object and the possible difficulties encountered. Some of the participants, such as P13, even referred to previous experiences with the object: “When I think about a door and artificial intelligence, a revolving door immediately comes into my head, as is so often the case in large shopping malls, that automatically slows down with people’s speed and vice versa” (P13). In general, the expectations of a smart bank account were higher than those of a smart door, which in most cases would only open and close automatically. This might be due to the abstract nature of a bank account, to which users can ascribe a human role of coach or adviser more easily than to a physical object such as a door, which is obviously not human.

4.4 On the Impact of AI: If AI Disappears

When asked what would happen if AI were to vanish tomorrow, the respondents had to think of the technologies and services in which AI is meaningfully embedded. Our observations showed that this was a complicated matter for the respondents, some of whom, such as P30, even asked the interviewer whether they were right about AI use cases: “AI makes our everyday lives easier, for example, for driving, we have driver assistants, I don’t know, is AI also used in airbags?” (P30). Our analysis revealed that this complexity pushed respondents towards envisioning the impact of the loss of AI mainly based on input they had received from personal conversations, as well as from mass and social media. We categorized the mentioned impacts of AI into the following three groups:

Social impact: “Disappearance of AI will cause a chaos in the world. There is definitely more AI in everything than we think” (P39). Several participants thought that AI has spread into all areas of life, so that its loss would lead to severe disruption. P44, who had taken a seminar about AI at university and usually talked about the accuracy and simplicity of the roles that AI has taken over, stated, “AI is boundless. I believe a life without AI is without development, it may not be the primitive life, but of course, there will be negative effects on all areas of life like engineering, medicine, education, agriculture, etc.” (P44).

Economic impact: “If AI disappears then the economy will get paralyzed and we cannot have a normal life anymore” (P18). Some participants, such as P18, stressed that the disappearance of AI would have serious economic consequences. P1, for instance, stressed that AI is important for the digital revolution in Germany with regard to economic progress: “Economic harm because companies like Amazon cannot use their recommender system anymore” (P1).

Personal impact: “I cannot imagine a life without AI. I ask myself how we could drive a long distance without navigation 20 years ago” (P42). For participants like P42, who cannot even imagine the disappearance of AI, the spectrum of personal impacts ranges from extreme cuts in personal life to small or even no personal changes: “There won’t be great personal consequences, except for Siri and Alexa” (P2). P5 detached his personal impact from the impact on others, saying, “There will be no consequences for average people, only those who have smart homes” (P5).

Not having AI, for better or worse: The respondents typically also expressed a position about the role of AI in today’s world and whether they perceived it as a positive or a negative one. In this regard, we identified four positions:

  1. Deplore life without AI: “If AI vanishes that would mean that our work and our life will be more difficult” (P50). About 60 % would deplore a life without AI. A representative of this view is P50, who believed that AI makes life easier: “If we want to make things easier, we should find more and more applications for AI in them” (P50). The extent of the regret also depends on how AI is understood: “So, if I lose my smart phone today and get back to the old phones without internet connection, then I will lose 20 % of my own identity and 80 % of my social relations” (P46).

  2. AI is Janus-faced: “The fate of humans in the midst of this great upturn in science is terrifying” (P43). Less than 10 % mentioned both negative and positive sides. A representative of this position is P43, who noted, “There are positive and negative sides. Natural disasters happen all over the world, AI saves humanity by improving the methods and strengthening the means to cope with them. But if AI disappears, then the unemployment rate will decrease” (P43). This quote highlights the complex relationship between humans and AI: humans as the creators of AI, a superpower that begins to compete with them. This view was shared by others, who considered AI to be a sign of a brighter future that comes with a risk. There were also other double evaluations due to the consideration of AI in different areas. For instance, P6 differentiated between the social and the personal areas: “Without AI, the world would fall several years into the past. But for me personally, I will have more personal interactions and the social relationships won’t be digital like now” (P6).

  3. Welcoming life without AI: More than 10 % of the participants were skeptical about AI and had serious concerns, and thus welcomed life without AI. One example is P26, who had several discussions with her husband about AI and the fear of being manipulated by it: “If AI vanishes, I will feel relieved, although many things would be done manually again” (P26). Similarly, P22 had a conversation with her daughter about the latter’s homework on the ethical issues of AI in the area of self-driving cars: “If there is no AI tomorrow, then the ethical issues we have about self-driving cars will be solved” (P22). The extreme case was presented by P47, who referred to AI in absolutely negative terms: “If AI vanished, the most important consequence would be more jobs for everyone” (P47).

  4. Being incurious about AI: There was also a group (about 20 %) of participants who were incurious, as they found AI not to be essential. They either believed that the disappearance of AI would bring about no specific changes or that the changes would be exceedingly small. A representative of this group was P23, who mentioned that many find AI not to be a relevant subject and avoid discussing it: “It would be a loss if we don’t have this technology anymore, but it’s not essential. Because until now, there aren’t any huge dependencies on AI. The economy can grow further with or without AI” (P23).

5 Discussion

In this section, we discuss the essence of our results with regard to their potential to support AI practitioners and XAI researchers in achieving more effective and meaningful user interactions with AI. We thus outline users’ perceptions and expectations of AI through the identified folk concepts and the set of user demands. Based on our study, we also discuss possible reasons why AI has remained imperceptible to its users after three decades.

5.1 Meaning Ascribed to AI: The Interconnected Folk Concepts

Regarding our first research question, examining what meanings people ascribe to AI, our study updates the 30-year-old study of Schank [44]. Most of the definitions of AI noted by Schank were also among the folk concepts that we identified. For instance, users’ definition of AI as magic bullets is closely connected to the idea of AI being a superpower. In addition, the view of AI in terms of learning machines was highly prominent in our study. Complementing the work of Schank, our research shows that people have different views regarding the sources, strategies, and outcomes of self-learning AI systems. These range from a highly technical understanding of machine learning to metaphorical notions and analogies with human learning. In contrast to Schank’s findings, the understanding of AI as an inference engine that turns experts’ knowledge into rules played only a subordinate role in our findings. This might reflect the prevailing concerns of the time when Schank wrote versus today, for while expert systems were an important topic 30 years ago, they are rarely present in today’s AI discourse.

When asked for three examples of AI, our respondents referred to visible and tangible artefacts (such as Alexa, robots, and self-driving cars), which confirms the results of the Northstar survey [8] and shows that people recognize visible artefacts as AI more easily. However, this result should be handled carefully, because here, the thing-ness of the conceptions of AI seems to be shaped by the nature of the question. In other words, when the question was formulated as “how do you explain AI?”, the participants did not provide an extensional definition, but instead gave more abstract and conceptual answers (such as AI having agency, simulating human competencies, or being a set of instructions).

Despite having a relatively homogeneous sample group (on average, representatives of a young generation of educated German users), our respondents had different understandings of AI and its capabilities. In this regard, our study confirms previous work on specific AI systems (such as content curation algorithms, recommender systems, and conversational agents) and the variety of folk theories and misconceptions reported there [12], [13], [26], [29], [50], [51]. Our study indicates that mental models of specific AI systems are not isolated but are shaped by common folk concepts and the grand narrative of AI in which machines become superhuman. Moreover, we argue that folk concepts are interconnected and build upon each other. To elaborate further, people’s perception of AI and AI-enabled systems influences their understanding of its meaningful and essential use cases, which in turn helps them evaluate AI’s current and future impact on their lives.

An example of this interconnectivity can be seen in the characterization of the folk concept of the complicated relationship between man and AI. Almost all explanations defined AI with reference to humans (such as being like humans, simulating humans, being independent of humans, or being made by humans). Metaphorically, AI was frequently spoken of in terms of adolescent development: strongly influenced by its parents and oriented towards them, while also turning away from them and claiming independence. This complicated relationship leads to a complex cooperation between the two, in which the role each plays is not clear to users. It also pervades the evaluation of AI as a technology that is part of our everyday lives.

5.2 Expectation Ascribed to AI: Design for the Unrealizable?

Designing for user expectations is an important guiding principle in usability engineering (ISO 9241-110) [23]. However, our study shows that people have quite diverse and high expectations of AI-enabled artefacts, such as “AI has the same behavior as humans and acts and reacts at the same level”, as well as ambitious expectations such as “a door that recognizes me and communicates with me based on its knowledge about me”, which surpass existing technological capabilities. This suggests that people lack an accurate picture of what is technically feasible, which can lead to unrealizable expectations of technology and disappointment when these expectations are not met. This may explain the phenomenon described in the literature of why users are dissatisfied with voice assistants such as Alexa, Google Home, and Apple HomePod [5], [32], [38], [46]: although these systems are useful for simple tasks such as turning on the light or playing a music track and work reliably, they cannot meet users’ high expectations.

Therefore, in our next study, we aim to characterize this gap between expectations and technical possibility by using more in-depth research techniques to grasp folk concepts, theories, and mental models of AI. Moreover, based on our study, we recommend that explainable AI be transparent not only about how the system works, but also about what a user can expect from it. We should also take seriously the insight from research that user experience is not shaped by the actual interaction alone, but occurs before, during, and after use [28]. From this perspective, realistic expectations must become part of the advertising of AI-enabled products and services, as well as of the AI-related narratives in science fiction literature and movies.

Another ambiguity for the respondents regarding the current state of AI concerns its users. Is AI only used by big high-tech companies and infrastructure providers? By tech-savvy people who live in smart homes and own self-driving cars? Is it used by the average consumer? Or can anyone be considered an AI user? To address their lack of knowledge in this area, the respondents referred to whatever input they had gathered from others. This might also explain the widely diverging evaluations of AI’s impact and of its current meaningful use cases, which ranged from broad areas such as the economy, medicine, politics, and transportation to narrower use cases such as voice assistants or gaming.

5.3 Limitations and Future Work

Our work has certain limitations that should be considered and that can motivate future work. One major limitation concerns the representativeness of our sample group and the known problem of self-selection bias. Our findings therefore primarily express the folk concepts of a young, well-educated, urban generation of digital natives in a Western society, namely Germany. Less can be said about the folk concepts of older people or of people in other societies. This can be improved by using a larger sample with more diversity in terms of technical background and age group, as well as by conducting a similar study in other countries. Deeper and more detailed direct questions could also be used to probe the boundary people draw between automation and simulation, or between the sources and strategies of self-learning, as well as the extent to which users’ engagement in social conversations shapes their perception and evaluation of AI’s meaningful use cases.

Therefore, in future work, we aim to investigate these folk concepts and theories, along with their interconnections and impact, as well as other influencing factors (e. g., sci-fi and social media trends), in greater depth. Our study also encourages the implementation of more user-friendly XAI by focusing on clear communication of what users should expect from AI-enabled systems. Such transparent AI systems can help us analyze how users’ expectations can be influenced and modified for more satisfying interactions.

6 Conclusion

In this work, we emphasized the importance of understanding AI as it is perceived by people for designing efficient and productive explainable AI systems. By outlining the fragmented previous work on people’s mental models and folk theories and analyzing the results of our in-depth interviews with 50 participants, we presented a thematic analysis that systematized the various perspectives and expectations people hold regarding AI and AI-enabled systems. Our results, based on a relatively homogenous sample group of mainly young and well-educated people, suggest that users’ perceptions of the current and future state of AI do not depend purely on their computational background, but are shaped by various interconnected folk concepts of AI’s definition, capabilities, and meaningful use cases.

Like previous studies [41], we argue that explanations should not only address users’ needs, but should also be adjusted to their understanding. Hence, XAI cannot be designed without knowing how people perceive AI, and it should make transparent what users can expect from it. We also argue that without modifying users’ expectations of AI-enabled systems, designing for user satisfaction will be a losing battle, and we encourage XAI practitioners to aim for clearer communication in this regard.

References

[1] Acharya, A.S. et al. 2013. Sampling: Why and how of it. Indian Journal of Medical Specialties. 4, 2, 330–333.

[2] Adadi, A. and Berrada, M. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access. 6, 52138–52160. DOI: https://doi.org/10.1109/ACCESS.2018.2870052.

[3] An, M. 2017. Artificial Intelligence Is Here – People Just Don’t Realize It.

[4] Battarbee, K. 2003. Defining co-experience. Proceedings of the 2003 international conference on Designing pleasurable products and interfaces (New York, NY, USA, Jun. 2003), 109–113.

[5] Bentley, F. et al. 2018. Understanding the Long-Term Use of Smart Speaker Assistants. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2, 3, 91:1–91:24. DOI: https://doi.org/10.1145/3264901.

[6] Blumer, H. 1986. Symbolic Interactionism.

[7] Boden, M.A. 2004. The Creative Mind: Myths and Mechanisms. Psychology Press.

[8] Davies, J. et al. 2020. Read the Arm 2020 Global AI Survey. Arm Blueprint.

[9] Deery, M. 2018. The 6 Stages in the Evolution of AI and Customer Experience.

[10] DeVito, M.A. et al. 2017. “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2017), 3163–3174.

[11] Druga, S. et al. 2017. “Hey Google is it OK if I eat you?”: Initial Explorations in Child-Agent Interaction. Proceedings of the 2017 Conference on Interaction Design and Children (New York, NY, USA, Jun. 2017), 595–600.

[12] Eslami, M. et al. 2016. First I “like” it, then I hide it: Folk Theories of Social Feeds. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2016), 2371–2382.

[13] Eslami, M. et al. 2015. “I always assumed that I wasn’t really that close to [her]”: Reasoning about Invisible Algorithms in News Feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2015), 153–162.

[14] Eslami, M. et al. 2019. User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2019), 1–14.

[15] Fast, E. and Horvitz, E. 2016. Long-Term Trends in the Public Perception of Artificial Intelligence. arXiv:1609.04904 [cs]. (Dec. 2016).

[16] Flick, U. 2014. The SAGE Handbook of Qualitative Data Analysis. SAGE Publications Ltd.

[17] Garfinkel, H. 1991. Studies in Ethnomethodology.

[18] Garg, R. and Sengupta, S. 2020. Conversational Technologies for In-home Learning: Using Co-Design to Understand Children’s and Parents’ Perspectives. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2020), 1–13.

[19] Gil, Y. and Selman, B. 2019. A 20-Year Community Roadmap for Artificial Intelligence Research in the US. arXiv:1908.02624 [cs]. (Aug. 2019).

[20] GoodAI. 2019. Understanding the public perception of AI. Medium.

[21] Hagras, H. 2018. Toward Human-Understandable, Explainable AI. Computer. 51, 9, 28–36. DOI: https://doi.org/10.1109/MC.2018.3620965.

[22] Hoffman, R.R. et al. 2019. Metrics for Explainable AI: Challenges and Prospects. arXiv:1812.04608 [cs]. (Feb. 2019).

[23] ISO 9241-11:2018(en), Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts. https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en. Accessed: 2020-09-17.

[24] Kiesler, S. and Goetz, J. 2002. Mental models of robotic assistants. CHI ’02 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA, Apr. 2002), 576–577.

[25] Kühl, N. et al. 2020. Do you comply with AI? – Personalized explanations of learning algorithms and their impact on employees’ compliance behavior. arXiv:2002.08777 [cs]. (Feb. 2020).

[26] Kulesza, T. et al. 2012. Tell me more? The effects of mental model soundness on personalizing an intelligent agent. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2012), 1–10.

[27] Kuzminykh, A. et al. 2020. Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2020), 1–13.

[28] Law, E.L.-C. et al. 2009. Understanding, scoping and defining user experience: a survey approach. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2009), 719–728.

[29] Lee, S. et al. 2019. “What does your Agent look like?”: A Drawing Study to Understand Users’ Perceived Persona of Conversational Agent. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2019), 1–6.

[30] Lim, B.Y. et al. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2009), 2119–2128.

[31] Long, D. and Magerko, B. 2020. What is AI Literacy? Competencies and Design Considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2020), 1–16.

[32] Luger, E. and Sellen, A. 2016. “Like Having a Really Bad PA”: The Gulf between User Expectation and Experience of Conversational Agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2016), 5286–5297.

[33] Molnar, C. 2020. Interpretable Machine Learning.

[34] Ngo, T. et al. 2020. Exploring Mental Models for Transparent and Controllable Recommender Systems: A Qualitative Study. Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (New York, NY, USA, Jul. 2020), 183–191.

[35] British Science Association. One in three believe that the rise of artificial intelligence is a threat to humanity. https://www.britishscienceassociation.org/news/rise-of-artificial-intelligence-is-a-threat-to-humanity.

[36] Parry, W.T. and Hacker, E.A. 1991. Aristotelian Logic. SUNY Press.

[37] Phillips, E. et al. 2011. From Tools to Teammates: Toward the Development of Appropriate Mental Models for Intelligent Robots. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 55, 1, 1491–1495. DOI: https://doi.org/10.1177/1071181311551310.

[38] Porcheron, M. et al. 2018. Voice Interfaces in Everyday Life. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2018), 1–12.

[39] Powers, A. and Kiesler, S. 2006. The advisor robot: tracing people’s mental model from a robot’s physical attributes. Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction (New York, NY, USA, Mar. 2006), 218–225.

[40] Rader, E. and Slaker, J. 2017. The importance of visibility for folk theories of sensor data. Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017), 257–270.

[41] Ribera, M. and Lapedriza, A. 2019. Can we do better explanations? A proposal of user-centered explainable AI. IUI Workshops (2019).

[42] Russ. Are we already taking AI for granted? What’s it got to do with Lasagne, and why should we care? b1creative.

[43] Schank, R.C. 1987. What Is AI, Anyway? AI Magazine. 8, 4, 59. DOI: https://doi.org/10.1609/aimag.v8i4.623.

[44] Schank, R.C. 1991. Where’s the AI? AI Magazine. 12, 4, 38. DOI: https://doi.org/10.1609/aimag.v12i4.917.

[45] Schegloff, E.A. 1968. Sequencing in Conversational Openings. American Anthropologist. 70, 6, 1075–1095. DOI: https://doi.org/10.1525/aa.1968.70.6.02a00030.

[46] Sciuto, A. et al. 2018. “Hey Alexa, What’s Up?”: A Mixed-Methods Studies of In-Home Conversational Agent Usage. Proceedings of the 2018 Designing Interactive Systems Conference (New York, NY, USA, Jun. 2018), 857–868.

[47] Smith, J.A. 2015. Qualitative Psychology: A Practical Guide to Research Methods. SAGE.

[48] TNS opinion & social at the request of the European Commission 2017. Special Eurobarometer 460.

[49] Van Lent, M. et al. 2004. An explainable artificial intelligence system for small-unit tactical behavior. Proceedings of the national conference on artificial intelligence (2004), 900–907.

[50] Xu, Y. and Warschauer, M. 2020. What Are You Talking To?: Understanding Children’s Perceptions of Conversational Agents. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2020), 1–13.

[51] Yao, Y. et al. 2019. Unpacking People’s Understandings of Bluetooth Beacon Systems – A Location-Based IoT Technology.

[52] Zhang, B. and Dafoe, A. 2019. Artificial Intelligence: American Attitudes and Trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford.

[53] 2016. “Artificial intelligence and life in 2030”, One-Hundred-Year Study on Artificial Intelligence. Stanford University.

[54] 2019. “What Consumers Really Think about AI”. Pegasystems.

Published Online: 2021-04-22
Published in Print: 2021-04-27

© 2021 Walter de Gruyter GmbH, Berlin/Boston
