Abstract
This study examines the failure of Microsoft's chatbot Tay, one of the most advanced chatbots of its time. Many users, commentators and experts strongly anthropomorphised this chatbot in their assessments of the case. This view is so widespread that it can be identified as a typical cognitive distortion, or bias. The study presents a summary of the facts of the Tay case and argues three points: (1) Tay did not mean anything by its morally objectionable statements because, in principle, it was not able to think; (2) the controversial content spread by this AI was interpreted incorrectly—not as a mere compilation of meaning (parroting), but as its disclosure; (3) even though chatbots are not members of the symbolic order of spatiotemporal relations of the human world, we treat them in this way in many respects.
Acknowledgements
This work was supported by the Technology Agency of the Czech Republic, grant number TL01000299 (Development of theoretical-application frameworks for social change in the reality of the transformation of industry).
Disclaimer
This article quotes language that some readers may consider profane, vulgar, or offensive. Owing to the topic studied, quoting offensive language is academically justified, but neither the authors, the Editor, nor the publisher in any way endorses the use of these words or the content of the quotes. Likewise, the quotes do not represent the opinions of the authors, the Editor, or the publisher, and we condemn online harassment and offensive language.
Cite this article
Zemčík, T. Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases?. AI & Soc 36, 361–367 (2021). https://doi.org/10.1007/s00146-020-01053-4