
Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases?

  • OPEN FORUM
  • Published in AI & SOCIETY

Abstract

This study deals with the failure of one of the most advanced chatbots, Tay, created by Microsoft. Many users, commentators and experts strongly anthropomorphised this chatbot in their assessment of the case around Tay. This view is so widespread that we can identify it as a typical cognitive distortion or bias. This study presents a summary of the facts concerning the Tay case, supported by the perspectives of eminent experts: (1) Tay did not mean anything by its morally objectionable statements because, in principle, it was not able to think; (2) the controversial content spread by this AI was interpreted incorrectly: not as a mere compilation of meaning (parroting), but as its disclosure; (3) even though chatbots are not members of the symbolic order of spatiotemporal relations of the human world, we treat them in this way in many respects.



Acknowledgements

This work was supported by the Technology Agency of the Czech Republic, grant number TL01000299, "Development of the theoretical-application frameworks for a social change in the reality of the transformation of industry".

Disclaimer

This article uses words or language that some readers may consider profane, vulgar, or offensive. Owing to the topic studied in this article, quoting offensive language is academically justified, but neither the authors, the Editor, nor the publisher in any way endorse the use of these words or the content of the quotes. Likewise, the quotes do not represent the opinions of the authors, the Editor, or the publisher, and we condemn online harassment and offensive language.

Author information


Corresponding author

Correspondence to Tomáš Zemčík.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zemčík, T. Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases?. AI & Soc 36, 361–367 (2021). https://doi.org/10.1007/s00146-020-01053-4

