
Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News

  • Original Research/Scholarship
  • Published in Science and Engineering Ethics

Abstract

This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news remains a wicked socio-technical problem that demands not only engineering but also ethical considerations. We suggest employing insights from ethics of care while maintaining its speculative stance, asking how algorithms and design processes would differ if they generated care while fighting fake news. After reviewing the major characteristics of ethics of care and the phases of care, we offer four algorithmic design principles. The first principle highlights the need for software designers to develop a strategy for dealing with fake news. The second principle calls for the involvement of various stakeholders in the design process in order to increase the chances of successfully fighting fake news. The third principle suggests allowing end-users to report fake news. Finally, the fourth principle proposes keeping the end-user updated on the treatment of the suspected news items. Implementing these principles as care practices can render the development process more ethically oriented as well as improve the ability to fight fake news.
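The third and fourth principles together describe an interaction loop between the end-user and the system: a report on a suspected item is filed, handled, and its treatment is communicated back. The following Python sketch is only a hypothetical illustration of that loop; the class, field, and status names are ours, not the authors'.

    # Hypothetical illustration of principles three and four: an end-user
    # report on a suspected news item, with status updates sent back to the
    # reporter as the item is treated.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FakeNewsReport:
        item_url: str
        reporter_id: str
        status: str = "received"          # received -> under_review -> resolved
        history: List[str] = field(default_factory=list)

        def update_status(self, new_status: str, note: str = "") -> str:
            """Advance the report and return a message for the reporting user."""
            self.status = new_status
            message = f"Your report on {self.item_url} is now '{new_status}'. {note}".strip()
            self.history.append(message)
            return message

    # Example: a user flags an item and is later told how it was handled.
    report = FakeNewsReport("https://example.org/story", "user42")
    print(report.update_status("under_review"))
    print(report.update_status("resolved", "The item was labeled as misleading."))

Such a record is only one minimal way to operationalize "keeping the end-user updated"; the article itself does not prescribe an implementation.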


Notes

  1. https://techcrunch.com/2022/07/11/meta-launches-sphere-an-ai-knowledge-tool-based-on-open-web-content-used-initially-to-verify-citations-on-wikipedia (accessed 13 July 2022).

  2. Compare this to more traditional approaches to ethics (e.g., Kearns & Roth, 2019), which propose that human subjects set values such as privacy, fairness, and explainability, and then embed them into objects such as algorithms.

  3. Our approach is universal in nature and so it can be applied to different types of algorithms, such as detection algorithms, models for analyzing the spread of news, etc.

  4. In this paper, the content of fake news is not limited to articles only, but can also include images, audio, videos, and social media posts (Waisbord, 2018).

  5. A manifestation of the socio-technical aspects of fake news and its wickedness can be found in the mechanisms by which fake news spreads via social network platforms. Because their algorithms circulate content based on rising popularity, the amount of fake news has increased (Zhang & Ghorbani, 2020). Consequently, the impact of misinformation has increased (Borges, 2019; Vishwanath, 2015; for the search engine effect see Conroy et al., 2015; Rubin et al., 2016). Sometimes fake news is shared as such, and sometimes it is shared as the truth. It turns out that humans are not very successful at distinguishing fake news from truth (see Bond & DePaulo, 2006; Pennock, 2019). Without proper training and education, and without the aid of technological tools, people score 54% on tasks of distinguishing truth from falsehood (Rubin et al., 2016; Soetekouw & Angelopoulos, 2022). Against this background, UNESCO (2019) has recommended developing digital and AI skills that include identifying and handling fake news.

  6. A significant problem in this regard relates to the so-called ‘ground truth theory’, which suggests that there should be an event that ‘grounds’ the information (e.g., news). For more on the relation between ground truth theory and fake news, see Southwell et al. (2017).

  7. Some philosophers insist that ethics of care might have earlier philosophical origins. Slote (2007) traced it back to British sentimentalism, in particular to philosophers such as Shaftesbury and Hume.

  8. A similar view can be found in other contemporary theories such as postphenomenology, which stresses the role that technology plays in establishing new relations between the subject and the world s/he lives in (see Ihde, 1979, 1990; Liberati, 2016; Wellner, 2017, 2018; Rosenberger, 2017; Mykhailov, 2020).

  9. The emphasis on doing is not unique to ethics of care. For example, James Laidlaw (2013) discusses the relations between ethical responsibility and doing through the concept of agency, like some other ethical frameworks (pp. 180–186). However, in this article we do not discuss the question of responsibility for the production and distribution of fake news. Instead, we focus on the joint effort to fight it together with a multiplicity of stakeholders.

  10. The indication of the source can be expanded to context (for details see Record & Miller, 2022).

  11. Backward strategies are often employed for modeling various processes (including the spreading of fake news). In the case of fake news detection, some of these strategies can work backward in order to identify the source of the misinformation (a minimal sketch of such backward tracing appears after these notes).

  12. The wide implementation of various mathematical models raises significant moral issues in medicine (Mykhailov, 2021, 2022), law (Calo, 2015), and warfare (Sullins, 2010).

  13. https://opensource.org/osd (accessed 22 July 2022).

  14. One of these exceptions is the Linux system, where the end-user is able to suggest ‘check-ins’ to the code. These suggestions might solve some bugs and add improved features to the system. This can be regarded as an effort to go beyond the designers’ community alone. However, it works only for users who are able to work with the code.

  15. Our suggestion here goes beyond Puig de la Bellacasa's expansion of Tronto's definition to regard the world as populated by humans and non-humans (Puig de la Bellacasa, 2017, p. 176). Here we try to grant non-humans like algorithms some form of responsibility and accountability.

  16. Some of the principles provided in our paper align with several guidelines developed in the last few years. Although many of these guidelines do not directly relate to fake news, some of them might be useful in developing software to battle fake news in the future. For example, the European Ethics Guidelines on trustworthy AI recommend diversity (across the whole AI life cycle) and transparency, allowing mechanisms for reporting errors and interacting with end-users, as offered in this article.
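As a rough, hypothetical illustration of the backward strategy mentioned in note 11 (not taken from the article; the function and variable names are ours, and the sketch assumes the share graph is available as "shared-from" links with timestamps):

    # A minimal, hypothetical sketch of backward tracing: given "shared-from"
    # links and publication times, walk backward from a flagged post toward
    # its earliest known origin.
    def trace_source(shared_from, timestamps, flagged_post):
        """shared_from maps a post ID to the post it was shared from;
        timestamps maps post IDs to publication times."""
        path = [flagged_post]
        seen = {flagged_post}
        current = flagged_post
        # Follow parent links until there is no parent (or a cycle is hit).
        while current in shared_from and shared_from[current] not in seen:
            current = shared_from[current]
            seen.add(current)
            path.append(current)
        # The earliest-published post on the backward path is the candidate origin.
        origin = min(path, key=lambda post: timestamps.get(post, float("inf")))
        return origin, path

    # Example: post "C" was shared from "B", which was shared from "A".
    shared_from = {"C": "B", "B": "A"}
    timestamps = {"A": 1, "B": 2, "C": 3}
    print(trace_source(shared_from, timestamps, "C"))  # ('A', ['C', 'B', 'A'])

In a real detection pipeline the share graph would come from platform data, and the result would only be a candidate origin, since sources outside the platform cannot be traced this way.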


Funding

The work on this paper by Dr. Mykhailov has been supported financially by the Major Project of the National Social Science Fund of China: “The philosophy of technological innovations and the practical logic of Chinese independent innovation” (技术创新哲学与中国自主创新的实践逻辑研究), Grant Number: 19ZDA040.

Author information


Corresponding author

Correspondence to Galit Wellner.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wellner, G., Mykhailov, D. Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News. Sci Eng Ethics 29, 30 (2023). https://doi.org/10.1007/s11948-023-00450-4

