Abstract
This article suggests several design principles intended to assist in the development of ethical algorithms, exemplified by the task of fighting fake news. Although numerous algorithmic solutions have been proposed, fake news remains a wicked socio-technical problem that demands not only engineering but also ethical consideration. We suggest employing insights from the ethics of care, while maintaining its speculative stance, to ask how algorithms and design processes would differ if they generated care and fought fake news. After reviewing the major characteristics of the ethics of care and the phases of care, we offer four algorithmic design principles. The first principle highlights the need for software designers to develop a strategy for dealing with fake news. The second principle calls for the involvement of various stakeholders in the design process in order to increase the chances of successfully fighting fake news. The third principle suggests allowing end-users to report fake news. Finally, the last principle proposes keeping end-users updated on the treatment of suspected news items. Implementing these principles as care practices can render the development process more ethically oriented and improve the ability to fight fake news.
Notes
Compare this to more traditional ethics (e.g., Kearns & Roth, 2019), which proposes that human subjects set values such as privacy, fairness, and explainability, and then embed them into objects such as algorithms.
Our approach is universal in nature, so it can be applied to different types of algorithms, such as detection algorithms, models for analyzing the spread of news, etc.
In this paper, the content of fake news is not limited to articles; it can also include images, audio, videos, and social media posts (Waisbord, 2018).
A manifestation of the socio-technical aspects of fake news and its wickedness can be found in the mechanisms by which fake news spreads via social network platforms. Because their algorithms circulate content based on rising popularity, the amount of fake news has increased (Zhang & Ghorbani, 2020), and consequently so has the impact of misinformation (Borges, 2019; Vishwanath, 2015; for the search engine effect see Conroy et al., 2015; Rubin et al., 2016). Sometimes fake news is shared as such, and sometimes it is shared as the truth. It turns out that humans are not very successful at distinguishing fake news from truth (see Bond & DePaulo, 2006; Pennock, 2019): without proper training and education, and without the aid of technological tools, people score only 54% on tasks of distinguishing truth from falsehood (Rubin et al., 2016; Soetekouw & Angelopoulos, 2022). Against this background, UNESCO (2019) has recommended developing digital and AI skills that include the identification and handling of fake news.
A significant problem in this regard relates to the so-called ‘ground truth theory’, which suggests that there should be an event that ‘grounds’ the information (e.g., news). For more on the relation between the ground truth theory and fake news, see Southwell et al. (2017).
Some philosophers insist that the ethics of care might have earlier philosophical origins. Slote (2007) traced it back to British sentimentalism, in particular to philosophers such as Shaftesbury and Hume.
A similar view can be found in other contemporary theories such as postphenomenology, which stresses the role that technology plays in establishing new relations between the subject and the world s/he lives in (see Ihde, 1979, 1990; Liberati, 2016; Wellner, 2017, 2018; Rosenberger, 2017; Mykhailov, 2020).
The emphasis on doing is not unique to the ethics of care. For example, James Laidlaw (2013) discusses the relations between ethical responsibility and doing through the concept of agency (pp. 180–186). However, in this article we do not discuss the question of responsibility for the production and distribution of fake news. Instead, we focus on the joint effort to fight it together with a multiplicity of stakeholders.
The indication of the source can be expanded to context (for details see Record & Miller, 2022).
Backward strategies are often employed for modeling various processes, including the spreading of fake news. In the case of fake news detection, some of these strategies work backward in order to identify the source of the misinformation.
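The article does not specify any implementation of such a backward strategy; the following is only an illustrative sketch under a simplifying assumption, namely that each shared post records the post it was shared from. All identifiers (`trace_to_source`, the `p1`–`p4` post ids) are hypothetical. The sketch walks these share records backward from a flagged post until it reaches a post with no known predecessor, a candidate source of the misinformation.

```python
def trace_to_source(share_graph, flagged_post):
    """Walk 'shared_from' edges backward from a flagged post.

    share_graph maps each post id to the id of the post it was
    shared from. The walk stops at a post with no recorded
    predecessor (a candidate original source) or on a cycle.
    Returns the chain of posts, ending with the candidate source.
    """
    path = [flagged_post]
    seen = {flagged_post}
    current = flagged_post
    while current in share_graph:
        parent = share_graph[current]
        if parent in seen:  # guard against cyclic share records
            break
        path.append(parent)
        seen.add(parent)
        current = parent
    return path  # path[-1] is the candidate source

# Hypothetical share records: p4 was shared from p3, and so on.
shares = {"p4": "p3", "p3": "p2", "p2": "p1"}
print(trace_to_source(shares, "p4"))  # ['p4', 'p3', 'p2', 'p1']
```

Real platforms would of course need far richer data (timestamps, multiple parents, deleted posts); the point here is only the backward direction of the search.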
https://opensource.org/osd (Accessed, 22 July 2022).
One such exception is Linux, where end-users are able to suggest ‘check-ins’ to the code. These suggestions might fix bugs or add improved features to the system, and they can be regarded as an effort to reach beyond the designers’ community. However, this works only for users who are able to work with the code.
Our suggestion here goes beyond Puig de la Bellacasa's expansion of Tronto's definition to regard the world as populated by humans and non-humans (Puig de la Bellacasa, 2017, p. 176). Here we try to grant non-humans, such as algorithms, some form of responsibility and accountability.
Some of the principles offered in this paper align with guidelines developed in the last few years. Although many of these guidelines do not relate directly to fake news, some of them might be useful in developing software to battle fake news in the future. For example, the European Ethics Guidelines for Trustworthy AI recommend diversity (throughout the whole AI life cycle) and transparency, allowing for mechanisms of reporting errors and interacting with end-users, as offered in this article.
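The third and fourth principles — letting end-users report fake news and keeping them updated on the treatment of suspected items — can be made concrete with a minimal sketch. The article prescribes no implementation; everything below (`ReportTracker`, `FakeNewsReport`, the status labels, the item ids) is a hypothetical illustration of how a reporting channel and a user-visible treatment history might be wired together.

```python
from dataclasses import dataclass, field

@dataclass
class FakeNewsReport:
    """A single end-user report on a suspected news item."""
    item_id: str
    reporter: str
    reason: str
    status: str = "received"
    history: list = field(default_factory=list)

class ReportTracker:
    """Registry where users file reports (third principle) and can
    later query how the suspected item is being treated (fourth
    principle)."""

    def __init__(self):
        self._reports = {}

    def submit(self, item_id, reporter, reason):
        report = FakeNewsReport(item_id, reporter, reason)
        report.history.append("received")
        self._reports[item_id] = report
        return report

    def update_status(self, item_id, new_status):
        # Record every change so the treatment stays transparent.
        report = self._reports[item_id]
        report.status = new_status
        report.history.append(new_status)

    def status_for_user(self, item_id):
        report = self._reports[item_id]
        return f"Item {report.item_id}: {report.status}"

tracker = ReportTracker()
tracker.submit("article-17", "user42", "fabricated quote")
tracker.update_status("article-17", "under review")
print(tracker.status_for_user("article-17"))  # Item article-17: under review
```

Keeping the full status history, rather than only the latest label, is what turns the mechanism into the kind of ongoing feedback loop the fourth principle asks for.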
References
Anderson, M., & Anderson, S. L. (2011). Machine ethics. Cambridge University Press.
Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: Problems, causes, solutions. Digital Journalism, 6(2), 154–175.
Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. In FAT* 2020—Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 210–219).
Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234. https://doi.org/10.1207/s15327957pspr1003_2
Borges, P. M. (2019). The role of beliefs and behavior on Facebook: A semiotic approach to algorithms, fake news, and transmedia journalism. International Journal of Communication, 13, 603–618.
Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103, 513–563.
Chakraborty, T. (2021). Dynamics of fake news diffusion. In Deepak, P., T. Chakraborty, C. Long & Santhosh Kumar, G. (Eds.), Data science for fake news (Vol. 42, pp. 101–127). Springer. https://link.springer.com/chapter/10.1007/978-3-030-62696-9_5
Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1), 1–4. https://doi.org/10.1002/pra2.2015.145052010082
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001
de Laat, P. B. (2014). From open-source software to Wikipedia: ‘Backgrounding’ trust by collective monitoring and reputation tracking. Ethics and Information Technology, 16(2), 157–169. https://doi.org/10.1007/S10676-014-9342-9
Fazio, L. K., Brashier, N. M., Keith Payne, B., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993–1002. https://doi.org/10.1037/XGE0000098
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Gimpel, H., Heger, S., Kasper, J., & Schäfer, R. (2020). The power of related articles—Improving fake news detection on social media platforms. In Proceedings of the annual Hawaii international conference on system sciences (HICSS), (pp. 6063–6072). https://doi.org/10.24251/HICSS.2020.743
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
Held, V. (2006). The ethics of care: Personal, political, and global. Oxford University Press.
Henkel, L. A., & Mattson, M. E. (2011). Reading is believing: The truth effect and source credibility. Consciousness and Cognition, 20(4), 1705–1721. https://doi.org/10.1016/J.CONCOG.2011.08.018
Ihde, D. (1979). Technics and praxis. (24 vol.) Springer.
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
Ihde, D. (2022). Postphenomenology, the empirical turn and “Transcendentality.” Foundations of Science, 27(3), 851–854. https://doi.org/10.1007/s10699-020-09741-6
Johnson, D. G. (2011). Computer systems: Moral entities but not moral agents. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 168–183). Cambridge University Press.
Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
Kim, A., & Dennis, A. R. (2018). Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly, 43(3), 1025–1039. https://doi.org/10.2139/SSRN.2987866
Laidlaw, J. (2013). The subject of virtue: An anthropology of ethics and freedom. Cambridge University Press.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Liberati, N. (2016). Technology, phenomenology and the everyday world: A phenomenological analysis on how technologies mould our world. Human Studies, 39(2), 189–216. https://doi.org/10.1007/s10746-015-9353-5
Lugea, J. (2021). Linguistic approaches to fake news detection. In Deepak, P., T. Chakraborty, C. Long, & Santhosh Kumar, G. (Eds.), Data science for fake news (pp. 287–302). Springer.
Martin, A., Myers, N., & Viseu, A. (2015). The politics of care in technoscience. Social Studies of Science, 45(5), 625–641. https://doi.org/10.1177/0306312715602073
Michelfelder, D. P., & Jones, S. A. (2016). From caring about sustainability to developing care-ful engineers. In Walter Leal Filho & Susan Nesbit (Eds.), New developments in engineering education for sustainable development (pp. 173–184). Springer.
Michelfelder, D. P., Wellner, G., & Wiltse, H. (2017). Designing differently: Toward a methodology for an ethics of feminist technology design. In S. O. Hansson (Ed.), The ethics of technology: Methods and approaches (pp. 193–218). Rowman & Littlefield.
Mitcham, C. (2009). Convivial software: An end-user perspective on free and open source software. Ethics and Information Technology, 11(4), 299–310. https://doi.org/10.1007/S10676-009-9209-7
Molina, M. D., Sundar, S. S., Le, T., & Lee, D. (2021). “Fake News” is not simply false information: A concept explication and taxonomy of online content. American Behavioral Scientist, 65(2), 180–212. https://doi.org/10.1177/0002764219878224
Mykhailov, D. (2020). The phenomenological roots of technological intentionality: A postphenomenological perspective. Frontiers of Philosophy in China, 15(4), 612–635. https://doi.org/10.3868/s030-009-020-0035-6
Mykhailov, D. (2021). A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Human Affairs, 31(2), 149–164. https://doi.org/10.1515/humaff-2021-0013
Mykhailov, D. (2022). Postphenomenological variation of instrumental realism on the “problem of representation”: fMRI imaging technology and visual representations of the human brain. Prometeica Journal of Philosophy and Science, 2022, 64–78. https://doi.org/10.34024/prometeica.2022.Especial.13520
Mykhailov, D., & Liberati, N. (2022). A study of technological intentionality in C++ and generative adversarial model: Phenomenological and postphenomenological perspectives. Foundations of Science, 2022, 1–17. https://doi.org/10.1007/S10699-022-09833-5
Nair, I., & Bulleit, W. M. (2019). Pragmatism and care in engineering ethics. Science and Engineering Ethics, 26(1), 65–87. https://doi.org/10.1007/S11948-018-0080-Y
Nallur, V. (2020). Landscape of machine implemented ethics. Science and Engineering Ethics, 26(5), 2381–2399. https://doi.org/10.1007/s11948-020-00236-y
Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media and Society, 20(10), 3720–3737. https://doi.org/10.1177/1461444818758715
Noddings, N. (1986). Caring: A feminine approach to ethics and moral education. University of California Press.
Pennock, R. T. (2019). An instinct for truth: Curiosity and the moral character of science. MIT Press.
Puig de la Bellacasa, M. (2017). Matters of care: Speculative ethics in more than human worlds. University of Minnesota Press.
Record, I., & Miller, B. (2022). People, posts, and platforms: Reducing the spread of online toxicity by contextualizing content and setting norms. Asian Journal of Philosophy, 1, 41.
Rittel, H. W., & Webber, M. M. (1974). Wicked problems. Man-Made Futures, 26(1), 272–280.
Rosenberger, R. (2017). Notes on a nonfoundational phenomenology of technology. Foundations of Science, 22(3), 471–494. https://doi.org/10.1007/s10699-015-9480-5
Rubin, V. L., Conroy, N., Chen, Y., & Cornwell, S. (2016). Fake news or truth? Using satirical cues to detect potentially misleading news. In Proceedings of the second workshop on computational approaches to deception detection (pp. 7–17). https://doi.org/10.18653/V1/W16-0802
Santhosh Kumar, G. (2021). Deep learning for fake news detection. In Deepak, P., T. Chakraborty, C. Long, & Santhosh Kumar, G. (Eds.), Data science for fake news (Vol. 42, pp. 71–100). Springer. https://link.springer.com/chapter/10.1007/978-3-030-62696-9_4
Scantamburlo, T. (2021). Non-empirical problems in fair machine learning. Ethics and Information Technology, 23(4), 703–712. https://doi.org/10.1007/S10676-021-09608-9/METRICS
Schiaffonati, V. (2022). Explorative experiments: A paradigm shift to deal with severe uncertainty in autonomous robotics. Perspectives on Science, 30(2), 284–304. https://doi.org/10.1162/POSC_A_00415
Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media. ACM SIGKDD Explorations Newsletter, 19(1), 22–36. https://doi.org/10.1145/3137597.3137600
Singh, J. P., Kumar, A., Rana, N. P., & Dwivedi, Y. K. (2020). Attention-based LSTM network for rumor veracity estimation of tweets. Information Systems Frontiers, 2020, 1–16. https://doi.org/10.1007/S10796-020-10040-5/TABLES/7
Slote, M. (2007). The ethics of care and empathy. Routledge.
Soetekouw, L., & Angelopoulos, S. (2022). Digital resilience through training protocols: Learning to identify fake news on social media. Information Systems Frontiers, 1, 1–17. https://doi.org/10.1007/S10796-021-10240-7/TABLES/8
Southwell, B. G., Thorson, E. A., & Sheble, L. (2017). The persistence and peril of misinformation: Defining what truth means and deciphering how human brains verify information are some of the challenges to battling widespread falsehoods. American Scientist, 105(6), 372. https://doi.org/10.1511/2017.105.6.372
Sullins, J. P. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275. https://doi.org/10.1007/s10676-010-9241-7
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science Magazine, 751–752.
Tronto, J. C. (2020). Moral boundaries: A political argument for an ethic of care. Routledge. https://doi.org/10.4324/9781003070672
UNESCO. (2019). Preliminary study on the ethics of Artificial Intelligence.
Verbeek, P.-P. (2009). Cultivating humanity: Towards a non-humanist ethics of technology. In J. K. B. Olsen, E. Selinger, & S. Riis (Eds.), New waves in philosophy of technology (pp. 241–263). Palgrave.
Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
Vishwanath, A. (2015). Diffusion of deception in social media: Social contagion effects and its antecedents. Information Systems Frontiers, 17(6), 1353–1367. https://doi.org/10.1007/s10796-014-9509-2
Waisbord, S. (2018). Truth is what happens to news. Journalism Studies, 19(13), 1866–1878. https://doi.org/10.1080/1461670X.2018.1492881
Wellner, G. (2017). I-media-world: The algorithmic shift from hermeneutic relations to writing relations. In Y. Van Den Eede, S. O'Neal Irwin, & G. Wellner (Eds.), Postphenomenology and media: Essays on human–media–world relations (pp. 207–228). Lexington Books.
Wellner, G. (2018). From cellphones to machine learning a shift in the role of the user in algorithmic writing. In A. Romele & E. Terrone (Eds.), Towards a philosophy of digital media (Vol. 33, pp. 205–224). Springer. https://doi.org/10.1007/978-3-319-75759-9_11
Wellner, G. (2020). The multiplicity of multistabilities: Turning multistability into a multistable concept. In G. Miller & A. Shew (Eds.), Reimagining philosophy and technology, reinventing Ihde (pp. 105–122). Springer. https://link.springer.com/chapter/10.1007/978-3-030-35967-6_7
Zhang, X., & Ghorbani, A. A. (2020). An overview of online fake news: Characterization, detection, and discussion. Information Processing and Management, 57(2), 102025. https://doi.org/10.1016/J.IPM.2019.03.004
Zhou, X., & Zafarani, R. (2020). A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Computing Surveys, 53(5), 1–40. https://doi.org/10.1145/3395046
Funding
The work on this paper by Dr. Mykhailov has been supported financially by the Major Project of the National Social Science Fund of China: “The philosophy of technological innovations and the practical logic of Chinese independent innovation” (技术创新哲学与中国 自主创新的实践逻辑研究), Grant Number: 19ZDA040.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Wellner, G., Mykhailov, D. Caring in an Algorithmic World: Ethical Perspectives for Designers and Developers in Building AI Algorithms to Fight Fake News. Sci Eng Ethics 29, 30 (2023). https://doi.org/10.1007/s11948-023-00450-4