Abstract
Artificial intelligence (AI) and robots have the potential to revolutionize society, with impacts ranging from the broadest reaches of industry and policy to the minutiae of daily life. The extent to which AI-based technologies can benefit human society depends on how people perceive them, that is, on their folk beliefs about AI and robots. The present paper aims to gain insight into people's perspectives on artificial intelligence and robots by examining these folk beliefs. In Study 1, we explored folk beliefs regarding general artificial intelligence and robots using metaphor nomination (Phase 1, N = 99), factor analysis (Phase 2, N = 267), and semantic analysis (Phase 3). Results indicated three primary folk beliefs for AI: the unknown, the assistants, and the machines. For robots, three primary folk beliefs emerged: the assistants, the companions, and the tools. In Study 2, we investigated folk beliefs about robots in various application contexts through free listing (Phase 1, N = 82) and factor analysis (Phase 2, N = 300). Results revealed four folk beliefs for companion robots: companion ability, applicable target, social consequence, and technology. Four folk beliefs likewise emerged for education robots: educational ability, advantage, disadvantage, and technology, while medical robots were associated with five folk beliefs: medical ability, advancement, social consequence, disadvantage, and technology. This research is a first step toward examining how ordinary people conceptualize artificial intelligence and robots through folk theories, and it identifies several directions for future research. Our findings also show that lay people's perceptions of artificial intelligence and robots are shaped by social cognitive processes, which implies that folk-theory methods can be used to investigate those processes. The study carries practical significance for designers and manufacturers of AI and robots, offering guidance on the professional capabilities expected of AI and robots, potential negative social consequences, and the needs of specific user groups.
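The Phase 2 analyses summarized above rest on standard exploratory factor analysis of participants' metaphor ratings. The sketch below illustrates that general technique on synthetic data; the metaphor labels, the 1-7 rating scale, the three-factor solution, and the use of scikit-learn are illustrative assumptions for exposition, not the authors' actual analysis pipeline.

```python
# Minimal sketch of exploratory factor analysis over metaphor ratings:
# participants rate how well each nominated metaphor fits "artificial intelligence",
# and the ratings are reduced to a small number of latent folk-belief factors.
# All data here are synthetic; names and factor count are illustrative only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

n_participants = 267  # Phase 2 sample size reported in the abstract
metaphors = ["alien", "black box", "assistant", "servant", "machine", "tool"]

# Synthetic 1-7 Likert ratings of how well each metaphor describes AI.
ratings = rng.integers(1, 8, size=(n_participants, len(metaphors))).astype(float)

# Extract three latent factors with varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

# Loadings show which metaphors cluster together, i.e. candidate folk beliefs
# such as "the unknown", "the assistants", and "the machines".
for metaphor, loading in zip(metaphors, fa.components_.T):
    print(f"{metaphor:>10}: {np.round(loading, 2)}")
```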
Data Availability
Data from the study are available from the corresponding author upon reasonable request.
Funding
This work was supported by the National Social Science Foundation of China (Grant No. 20CZX059), and the National Natural Science Foundation of China (Grant No. 72101132).
Author information
Authors and Affiliations
Contributions
LX: Conceptualization; Data curation; Formal analysis; Investigation; Methodology. YZ: Writing—original draft. FY: Conceptualization; Project administration; Supervision; Writing—review & editing. JW: Writing—original draft.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics Approval
The studies were conducted in accordance with the Declaration of Helsinki. All studies involving human participants were reviewed and approved by the Ethics Committee of Wuhan University.
Informed Consent
Informed consent was obtained from all participants involved in the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Xu, L., Zhang, Y., Yu, F. et al. Folk Beliefs of Artificial Intelligence and Robots. Int J of Soc Robotics 16, 429–446 (2024). https://doi.org/10.1007/s12369-024-01097-2