
The Misinformation Threat: A Techno-Governance Approach for Curbing the Fake News of Tomorrow

Published: 07 December 2023


Abstract

Recent diplomatic and economic developments of international significance have placed a bright spotlight on the distinction between genuine and false claims. In a post-truth era, or as it is more commonly known, the era of fake news, interest has grown in how this phenomenon can be, and has been, applied to manipulate people's reasoning and behaviour. The advancements in modern technology have pushed scientists to question the scale and impact of fake news on humanity. Since technology has become closely integrated with the lives of humankind over the years, its pervasiveness must be taken into account when analysing how fake news is generated and widely shared.

For a first-hand impression of the issue, a survey was designed and distributed to the general public. Additionally, a set of interviews was conducted with specialists from a number of fields, in which participants expressed their views on the issue and their suggestions for curbing it. Despite differences in their views, a general consensus was reached that a multidisciplinary approach is needed to reduce the threat and to better educate and inform the public. In doing so, multiple entities and skilled individuals must be brought in to play their part in safeguarding information veracity and supporting end users.

The study also provides research insight into this impending threat that should give a clearer understanding of the fate of tomorrow's cyberspace as well as the societies we form a part of. The proposed recommendations, which combine both the regulatory and technology fields, aim to bolster the capacity of nations and better equip governments in tackling the evolving state of fake news. It is envisioned that, through these recommendations, societies will move towards an age where both people and technology become part of the solution rather than the cause of such threats.


1 INTRODUCTION

The modern marvels of technology have brought about a digital era in which information can be found and accessed from anywhere. Readily available tools have made it easier for people to obtain the information they seek within cyberspace almost instantaneously. While this has been exceptionally convenient, advancements in technology have also conceived a new threat that endangers information accuracy and legitimacy, bringing about the emergence of fake news.

Exposure to fake news frequently occurs as people read through digital media, such as news articles and blogs, which may potentially be, or consist of, illegitimate information. While fake news has existed since man started communicating, manually developed online fake news has increased during the past decade. Misinformation is increasingly expected to be created by algorithms employing artificial intelligence (AI) concepts, especially those developed with machine learning, deep learning, and pattern analysis at their core [59]. The resulting content is generated to spread falsehoods and is harboured on specialised websites, with illegitimate news portals being the most common. When people are exposed to such content, the fact-checking process is often skipped and the information read and/or seen is believed at face value. Subsequently, these media are shared with family, friends, colleagues, and so on, reaching the public on social media and spreading fake news unconsciously. Furthermore, repetitive exposure psychologically contributes to people believing in the manipulated content being read.

Such exposure has been recognised to affect all, irrespective of age. However, a study by the Stanford History Education Group [57] suggests that students, young and otherwise digital-savvy, who are still attending an educational institution, are greatly susceptible to misinformation. It is inconclusive as of yet whether other social factors, such as social status and occupation, affect people's behaviour differently. Nonetheless, even psychologists fall victim to believing and spreading fake news unknowingly [42].

It is worth mentioning that financial stock markets and popular advertising brands have been dealing with and combating different forms of fake news, including scams and fraud, for a number of years. However, the rapid advances in technology and the ease of access to web-based information have drastically heightened the risks and costs brought about by this threat. This has resulted in specialised websites containing a combination of credible yet distorted coverage of sell-side financial research and geopolitical news. Studies conducted by the cybersecurity firm CHEQ in conjunction with the University of Baltimore found that this greatly impacts price action and investor psychology, which can negatively affect businesses in a variety of ways [8].

In light of recent internationally relevant developments, such as the 2016 U.S. presidential election, the 2016 U.K. Brexit referendum, and the coronavirus hoaxes of 2020 to name a few, researchers have been led to believe that fake news has the ability to alter people's reasoning mechanisms and, consequently, the outcome of a given event. This has resulted in mass propaganda and conspiracy theories, some of which have escalated into protests as well as armed assault [14, 59].

As suggested by these events, it is increasingly evident that the current threat may trigger a snowball effect, thus evolving into an issue that nations worldwide might not be sufficiently prepared or well-equipped to handle. Tackling this issue and raising further awareness of this imminent threat would bolster a culture of better fact-checking and vigilance amongst the public. Taking a governance approach in curtailing the matter may potentially aid in minimising the circulation of manually and machine-generated fake news while simultaneously supporting citizens with modern-day tools to identify illegitimate content.

1.1 Justification for the Research

In recent years, this threat has already influenced various fields of knowledge globally—from political election outcomes to stock prices to health awareness and more. AI consultants and technology suppliers warn that this is just the beginning of what is to come if the issue is left untreated [25].

While it has produced troublesome and erratic outcomes, fake news has remained quite plain and straightforward up to the time of writing, frequently consisting of primitive, unrefined stories that are nonetheless able to change people's thinking habits. In the future, rapid advancements in technology may be leveraged to produce tailored, automatically generated, and persuasive deceptive stories targeting individuals’ affinities and behaviours. These stories can be carefully tested before being released into the digital space in order to gauge and optimise their impact. In other words, today's threat may evolve into something far more complex and sophisticated via the implementation of machine learning concepts. Moreover, such information injected into social media platforms and information streams will spread and diffuse uncontrollably, as has been witnessed over the last couple of years.

Although substantial efforts and initiatives to control the issue have been acknowledged, a gap on the technology side of the problem has been recognised, since unsupervised automated text generation is a fairly new field of knowledge. Therefore, a timely and relevant contribution to understanding the role of technology relative to the threat at hand, which can be used as a foundation for containing fake news diffusion, warrants investigation. Dealing with such an issue may very well be a daunting responsibility, but ignoring it today will incur a problematic reality for the next generation of society. The promotion of effective solution designs fit for a modern world will ultimately aid people in distinguishing between fact and fiction.

1.2 Scope of Research

The purpose of this research was to investigate the influence and impact of technology on the already-existing issue of fake news, and subsequently to put forward recommendations to counteract it. A combination of primary and secondary research techniques was implemented throughout. Literature was reviewed to give a broad overview of the newly evolved threat and how it has ingrained itself in today's society. The primary research was based in Malta and was carried out through the dissemination of a digital survey to an audience, in addition to interviews with specialists and professionals whose experience and careers overlap with the field of AI.


2 RESEARCH AND RELATED WORKS

The literature can be broadly classified into research on technology-driven generation and research focusing on identification, classification, rectification, and governance. In reviewing the literature, gaps were identified in the research work available on automatic fake news generation and on governance initiatives.

2.1 The Practical Issue

Drawing on the available literature, the following subsections focus on the definition of fake news and the various forms it may take, in addition to a concise analysis of how people succumb to its effects.

2.1.1 Defining Fake News.

As one might expect, different scholarly interpretations of the term “fake news” are abundant in the existing literature. For Allcott and Gentzkow [2], it conveys the understanding of “distorted signals” far from the truth; they go on to define the term as deliberately false content, with reference to news articles, that has the potential to mislead viewers. Others, such as Fulgoni and Lipsman [20], give a collective and plain impression of fake news and conceive it as “digital pollution” that makes it harder for users to navigate the Internet. Lazer et al. [26] describe the term as fabricated content that is purposely false “yet perceivably realistic” as it achieves a degree of homogeneity with an individual's beliefs. Conforming with these three definitions, Visentin et al. [52] formulated an aggregated description of the term and, as a result, established three criteria that make up fake news:

(a)

intentionally false

(b)

realistically portrayed

(c)

potentially verifiable

In addition to defining the problem, an anatomical examination of false content can be utilised to discern the level of urgency with which authorities must act in relation to the threat posed to the public and to security in general. Vasu et al. [50] established the following spectrum of fake news:

(1)

Disinformation: This type is associated with the conscious distribution of falsehoods and rumours to undermine levels of security such as large-scale hoaxes. These are characterised by distinct and carefully constructed stories frequently present on a multitude of online news portals and tabloids and it is often argued that it is by far the hardest to pursue [1].

(2)

Misinformation: Similar to the previous category, the first type of misinformation relates to the propagation of falsehoods and rumours as part of a political agenda by a domestic group with different interpretations of facts based on ideological bias. Hence, it promotes certain ideas as well as stimulates conspiracies to tarnish individuals’ or entities’ reputations [2]. The second type of misinformation consists of the unconscious propagation of falsehoods and rumours without a particular goal, achieving viral status that may or may not be of malicious intent. Otherwise referred to as inadvertent disinformation, this type of fake news is the product of reporting inaccuracy, weak barriers of entry on the Internet, and journalistic integrity deficiencies [45].

(3)

Entertainment: A type of fake news worth mentioning is that which resides in the form of satire and humour-invoking content. Although harmless at face value, recent studies have shown that motivated individuals may utilise irony to masquerade nefarious purposes while pushing forward extremist ideas [50].

(4)

Falsehoods distributed for financial gain: This category is relative to the distribution of fake stories to secure revenue from propaganda, advertising and/or manipulating stock market prices. Despite being the least harmful of the four categories, it has been occasionally noted to profit from inciting hate between individuals [50].

It can be reasoned that, collectively, fake news constructs an unsettling reality that widely impacts those who have little or no interest in seeking the truth. With the combination of hard facts, false information, and individualistic opinions, fake news “plays to the fears and prejudices of people” [28] ultimately leading to a change in perception and behaviour.

2.1.2 The Essence of Fake News.

The present situation is dominated by online blogs flooded with illegitimate content and biased responses, as news media have become unreliable owing to their growing lack of verifiable information. According to Hardalov et al. [21], this is mostly due to the “willingness of journalists to be the first to write about a hot topic,” with the source verification phase subsequently omitted. The Guardian's editor-in-chief, Katharine Viner, concurs with this point and claims that popularity has managed to gain a higher significance than accuracy [51].

Such a circumstance is evidently greater on social media platforms, where individuals share what they see online with each other and believe themselves knowledgeable of the truth because of a prevailing perception that one is in possession of valuable information, without ever verifying it [61]. Furthermore, Okoro et al. [35] emphasise that veteran journalists undertaking traditional fact-checking find it quite laborious to keep up with the large quantities of information being generated in cyberspace.

Concurrently, statistics portray a continuous deterioration in the reputation of, and reliance on, news-broadcasting companies and their capacity to inform the public of accurate and impartial news [43, 52]. Most notably, only around 40% of American citizens deem mass media companies trustworthy, and in the U.K. the most viewed news sources are the least credible ones [43]. Such a shift has led the public to resort to auxiliary sources of information, particularly blogs and social media platforms, in search of unbiased news. Counterintuitively, this potentially makes online readers greatly predisposed to false or manipulated information [33].

Believing in illegitimate online content can stem from a variety of reasons. Taking the social media giants Facebook and X (formerly known as Twitter) as examples, stories that are engaged with all appear in the same visual format. Articles from unreliable news sites such as “The Expose,” “Breaking-CNN,” or “Palmer Report” look similar to an article from a credible source such as “Reuters.” With a decreasing percentage of people reading beyond the headline, source recognition has become of crucial importance, considering that people process information from simple visual aids such as short phrases or images [10].

Adding to this, the concern of “informational separation” [61], or the “filter bubble” as Pariser [38] and Viner [51] describe it, also prevents viewers from receiving full news coverage. These terms refer to a circumstance where two users, holding contrasting or opposing opinions about a particular context, receive different search results when querying the same thing [37]. This is highly evident on social media platforms, as people's news feeds are likely to consist of posts similar to their contextual point of view, thus limiting other perspectives from entering the “bubble,” even if sought [61]. Moreover, on social media people tend to follow and befriend others whose sentiments are in accordance with theirs, effectively constructing their own “echo chamber.”1

In tandem with the previously mentioned point, cognitive psychologists have presented their contribution on the illusory-truth effect, whereby individuals rate repeated statements as true even if they are factually false. It can therefore be argued that continuous exposure to false information, whether heard, read, or seen, may become the de facto standard of truth for some people. Such instances have been most prevalent during political campaigns to defame opposing electoral candidates [35]. In addition, confirmation bias plays a similar role in this context, as people unconsciously interpret evidence in line with their existing views and principles [50]. Through their experiments, Hasher et al. [22] noted that people continue to believe false content to be true even when their judgement is demonstrably wrong. Such findings have prompted the belief that the primary dilemma with deception is that humans themselves are not aware they are being deceived in the first place [41].

2.2 Existing Relevant Knowledge

The following subsections explore the automatic generation of false information from a technological point of view as well as aspects of how fake news has affected the economy and percolated through various industries in a time of global connectivity.

2.2.1 Machine-Generated Fake News.

Advances in machine learning, stemming from principles of AI, have paved the way for algorithms to create human-like sentences, known as Text Generative Models (TGM). Such algorithms have been used for story and conversational response generation, code and search engine auto-completion, as well as radiology report generation. Nonetheless, scholars have predicted the misuse of TGMs in the near future, also pertaining to the issue of fake news generation [24]. Since the chances of fake news detection by humans are slim, the authors suggest that accurate algorithmic models identifying synthetic text should be created to moderate content within cyberspace.
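To make the notion of a TGM concrete, the following minimal sketch uses the openly available GPT-2 model through the Hugging Face transformers library; the prompt is a hypothetical seed sentence, chosen here purely for illustration. The point is how little code is needed to produce fluent, human-like continuations:

```python
# Minimal TGM sketch: generate human-like continuations of a seed sentence.
# GPT-2 is used only because it is openly available; the prompt is a
# hypothetical example, and the output is synthetic text, not reporting.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Officials confirmed today that"  # hypothetical seed text
outputs = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

for i, out in enumerate(outputs, start=1):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])
```

The ease of running such a model is precisely what motivates the call above for accurate algorithmic moderation of synthetic text.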

OpenAI's2 latest project, ChatGPT, involves the deployment of a TGM known as GPT-3 which, while highly intelligent, has opened a new frontier of threats to the information landscape. The model settles many of the shortcomings of its predecessor, GPT-2, allowing for sophisticated text generation from very few training examples [30]. In fact, in contrast with the previous model, GPT-3 can be prompted with a few social media posts from Facebook or X, forum threads, or even e-mails, and it will instantly mimic the writing pattern and context, making ChatGPT remarkably (but not fully) perceptive. In their experiments, Mcguffie and Newhouse [30] engaged with the model in more than one language and noted that GPT-3 exhibits “surprisingly robust multilingual language understanding and generation.” Although the text produced contained a few grammatical mistakes, it was deemed “highly understandable and ideologically consistent.” In contrast, past studies gave less attention to other languages spoken in large parts of the world.

Whereas the training data needed to prime such a model has diminished, hosting a TGM still requires enterprise-scale resources. However, what has scholars worried is the fact that no technical-knowledge barriers have been identified to utilising such a model. Mcguffie and Newhouse [30] warn that unregulated imitations are a possibility, presenting substantial risks in relation to large-scale extremism and anarchism. It is for this reason that the authors, together with other scholars, push for safeguards against the swift weaponisation of such technology.

The authors are of the understanding that, in order to alleviate the risks involved, effective policies and cross-industry partnerships are paramount. They call on AI stakeholders, online service providers, policy-makers, and governments to incentivise educational initiatives and to develop standards and policies in the hope of preventing an incursion of machine-generated fake news, in addition to nurturing constructive online communities [24].

2.2.2 Market Manipulation.

The fake news phenomenon is relevant but not new to financial markets. The dissemination of false information is in fact a traditional tactic for manipulating market prices. However, with the advances in technology, this phenomenon has adopted a 21st-century twist that neither the courts nor securities and exchange agencies can circumvent easily.

Petcu [39] describes pump-and-dump schemes,3 one of the tools fraudsters use to manipulate the market. Such schemes became far easier to execute with the help of social media, which enabled fraudsters to hide behind a screen. Such was the case for Audience Inc. and Sarepta Therapeutics, as indicated by Watt [56]. With the help of the social media platform X, a fraudster was able to create two fake accounts impersonating the real market research firms Muddy Waters Research and Citron Research. These accounts were used to post multiple tweets containing false information about the aforementioned companies with the aim of lowering their stock prices. The fraudster prevailed in doing so and went on to buy several stocks through an online brokerage account, only to sell them at a higher price later on. The nefarious plot resulted in a combined loss of more than $1.6 million for the two companies.

Watt goes on to support the notion put forward by other scholars that social media has given criminals an edge in committing fraud by allowing them to maintain their anonymity while freely poisoning the information stream online. Resonating with Wineburg et al. [57] and Lima [28], the author also points out that, in comparison to other mediums, social media platforms remain widely unmonitored and unregulated. However, this does not mean that social media has taken over other digital means. Watt [56] also mentions market manipulation in the form of paid stock promotion schemes through fake media reports. While social media has paved the way for new forms of fraud to take place, it should also be recognised that false information also manifests itself in other digital media systems which are, for the most part, unregulated as well. Rather than phasing out, old fraud techniques seemingly evolve into new forms created by technological opportunities over the course of time.

2.2.3 Harmful Advertising.

In general, fake news has been seen to outweigh legitimate news in both popularity and engagement, since the former triggers charged emotions more effectively than the latter [32, 52]. The novelty and emotional arousal characteristic of fake news push interest in the content, increasing the number of clicks it receives. Domenico and Visentin [13] have claimed that the more extreme the content is, the more people will react to it. Mills et al. [32] concur with this argument. The authors are of the understanding that so-called “opportunistic individuals and organisations” create eye-catching websites, dubbed “click-bait” [19], and populate them with unique yet false content and advertisements. The more viewers, the more traffic is driven to these sites, and the higher the clicks, the higher the revenue gained from the companies paying for advert placement.
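As a back-of-the-envelope illustration of this incentive chain (every figure below is invented, not drawn from the cited studies), the revenue of such a site scales directly with traffic and clicks:

```python
# Toy illustration of the click-bait revenue chain: viewers -> impressions
# -> clicks -> advertising revenue. All figures are hypothetical.
daily_visitors = 50_000      # assumed site traffic
ads_per_page = 4             # assumed ad placements per article
click_through_rate = 0.02    # assumed fraction of impressions clicked
cost_per_click = 0.10        # assumed payment per click, in USD

impressions = daily_visitors * ads_per_page
clicks = impressions * click_through_rate
revenue = clicks * cost_per_click
print(f"{clicks:.0f} clicks/day -> ${revenue:,.2f}/day in ad revenue")
```

Under this simple model, doubling engagement, for example by making content more extreme, doubles revenue, which is consistent with the incentive described above.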

Waldrop [53] notes that fake news can easily grow insidious, since such stories often contain controversial content. In a similar vein, Mills et al. [32] remark that, unlike traditional journalism, fake news stories are ethically ambiguous, and being associated with questionable content in any form can have serious reputational and brand equity repercussions [2, 8, 13, 52]. Among the many brands found on fake news websites referenced in their study, Mills et al. [32] identified an array of peculiar sightings: “Girl Scouts advertisements were embedded in articles about jihadi sex crimes, advertisements for the American Red Cross were displayed alongside a comparison of school-shooting victims with Nazi symbolism, and advertisements for Hertz rental cars appeared next to an article titled ‘Are Liberal Pervs Sexually Obsessed with Refugees?’ ”

Domenico and Visentin [13] also remarked that the coronavirus pandemic of 2020 sparked another massive wave of false content linking the spread of the virus to 5G technology. This brought about vandalism and physical attacks by the ill-informed on many cell phone masts and telecom engineers across the U.K. [4, 55], with the industry suffering a loss in reputation as a result. Events such as these support the facilitation model brought forward by Visentin et al. [52], by which a news story's content can ultimately be rendered as behavioural attitudes towards brands.

2.2.4 Initiatives Against Fake News.

The existing literature has presented in detail the harmful effects of fake news on societies. Nonetheless, a number of studies have also highlighted measures to counteract the issue by disrupting its diffusion.

The crowd-funded non-governmental journalism project “Stopfake.org,” which started as an anti-propaganda media source and was originally launched in 2014 to combat online fake news related to the Crimea crisis in Ukraine, became a beacon of hope in Europe [50]. By frequently uploading content on information vigilance through its site, podcasts, radio programs, and traditional and social media, the educational platform has introduced high journalistic standards and strives to raise the level of media literacy and awareness of the dangers of false information. The team comprises media professionals proficient in 13 languages and achieves its goals by analysing propaganda, conducting verification training for a variety of interest groups, and participating in conferences and seminars [49].

Similarly, the government of Singapore launched a fact-checking segment, “Factually,” on its governmental website in 2012, originally to resolve misinterpretations of government policies and public concerns [46, 50]. Factually regularly publishes articles to the Singapore government site and its social media platforms with the aim of informing the public and exposing false information on an array of topics [48]. In the same way, the European Union's External Action Service set up a fact-checking site, “EUvsDisinfo,” in 2015, run by the East StratCom Task Force. The site aims to increase public awareness and understanding of the Russian Federation's disinformation campaigns while helping citizens within and outside European borders to “develop resistance to digital information and media manipulation” [16, 31].

Vasu et al. [50] point out that while such initiatives are beneficial, there are individuals who are not inclined to fact-check information owing to their cognitive biases or digital illiteracy. Additionally, such fact-checking methods are quite tedious and require sufficient willing and/or motivated manpower to perform the verifications. The authors also note that these initiatives are subject to ill-labelling by the public and are wrongly accused of bias.

Big technology companies have also succumbed to being labelled as fake content publishers, especially when factoring in the scenarios discussed in previous sections. Their ability to hold their position in this dispute is gradually waning as a result of continuous governmental pressure [50, 52]. In response, these companies have deployed a mix of human- and algorithm-based initiatives for self-regulation purposes; one such tool, launched by Facebook, allows users to flag posts deemed fake. These are reviewed by third-party fact-checkers forming part of the International Fact-Checking Network (IFCN).4 Likewise, China's WeChat application allows users to report users or entire chat groups for the dissemination of harmful content, including false information. The reports are examined and archived in a fake news database to block similar content automatically in the future [60]. In recent years, Facebook has also worked on removing multiple fake accounts, both those managed by people and those managed by bots,5 through pattern analysis instead of assessing the profiles’ published content [58]. By employing similar machine learning techniques to seek “false amplifiers of political stances, coordinated attempts to share and like certain posts, online harassment or the creation of inflammatory or racist content” [50], the company has struck a blow against the abusers by using the same technology against them.
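A highly simplified sketch of this behaviour-pattern approach is given below. It classifies accounts from activity features rather than published content; the feature set and the tiny training sample are hypothetical stand-ins for the far richer signals a platform would actually use:

```python
# Sketch: flag bot-like accounts from behavioural features, not content.
# Features and training rows are hypothetical illustrations only.
from sklearn.ensemble import RandomForestClassifier

# Per-account features: [posts per day, mean seconds between posts,
#                        follower/following ratio, fraction of re-shares]
X_train = [
    [2.0, 14000, 1.20, 0.30],   # typical human account
    [1.5, 20000, 0.80, 0.25],   # typical human account
    [120.0, 45, 0.05, 0.95],    # bot-like: rapid, repetitive re-sharing
    [300.0, 12, 0.02, 0.99],    # bot-like
]
y_train = [0, 0, 1, 1]          # 0 = genuine, 1 = suspected fake/bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# A new account posting 150 times a day, mostly re-shares:
print(clf.predict([[150.0, 30, 0.03, 0.97]]))  # expected: [1] (flagged)
```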

Research has shown an inclination to investigate and counteract fake news on a governmental level. However, the materialisation of this ambition is repeatedly obstructed by criticisms supporting freedom of speech [28].


3 RESEARCH METHODOLOGY

3.1 Methods and Techniques Selected

A two-pronged approach, in the form of a survey and semi-structured interviews, was chosen. The purpose of the chosen techniques was to reinforce one another in such a way that where the survey failed to retrieve information, the interviews would fill the gap. Furthermore, this methodological triangulation approach would also provide auxiliary data to ensure that the data collected is valid, credible, and authentic for research purposes [9, 44]. Additionally, cross-sectional data collection techniques, which explore phenomena in participants at a single moment in time, were favoured over longitudinal ones that carry out similar investigations over a period of time [44]. This technique was also chosen with the aim of better understanding the issue in question first-hand.

To execute this methodology, an online survey was hosted and distributed to the public, followed by semi-structured interviews. The interviews were held with key individuals whose experience overlaps with the field of study, who were asked a brief set of questions on the matter. The data gathered from this hybrid approach was then analysed qualitatively and quantitatively.

Surveys were considered a credible research tool for this investigation, and Google Forms offers the versatility to collect information from individuals in a personalised manner. Electronic surveys are more practical than conventional paper-based ones owing to their low cost, accuracy, and speed of results, with data able to be instantly and continuously reviewed [18]. These features made it feasible and practical to choose such an approach. The apparent limitation of this approach is non-response bias. Fleming and Bowden [18] point out that participants within a selected sample may have attitudes and knowledge that strongly differ from others, as well as a tendency to abstain or refuse to answer. This becomes more unfavourable when a difference in technical ability between respondents is evident, greatly affecting the outcome and conclusions.

The addition of interviews alongside the survey was intended to gather more accurate and well-defined results. Essentially, interviews can help researchers gather valid and reliable data relevant to the study's objectives. Since the issue of fake news affects individuals differently, it was considered vital to further explore the views of those who have been motivated to curtail the issue [5]. Semi-structured interviews contain elements of both structured and unstructured formats, and such an approach offers the possibility of gathering crucial, highly detailed information from respondents. Furthermore, the interviewer or researcher can direct the flow of primary data, implying total control over data collection, as well as clarify any issues with the respondents at any moment during the interview.

3.2 Research Procedures

An array of principles, concepts, arguments, and case studies stemming from different fields of knowledge, with the issue of fake news as a mutual component, was reviewed in the previous sections. The extensive literature reviewed allowed for the formulation of the online survey involving four themes (refer to Appendix Section A: Online survey questionnaire).

The demographic section of the survey, Section A.1, captures the participants’ backgrounds, including age span, levels of education reached, and employment. Section B.1 sheds light on participants’ information sources, that is, where they are likely to obtain their daily news and/or information from and what features make them choose these sources. Section C.1 introduces the element of trust towards media publishers and platforms and their reliability. Consequently, this section also sought to understand whether participants have been moved or felt the need to verify what they come across online, particularly what verification methods were performed and how frequently these were executed. The final section of the survey, Section D.1, presents participants with AI-related questions, notably on AI's capability of generating and identifying false information, while also quantifying the need for interventions/initiatives against misinformation, if any.

The electronic survey, which was created and filled in through Google Forms, was configured in such a way that it did not collect participants’ confidential information such as full name, location, and e-mail address. Since the survey's target audience was the general public, questions and replies were kept direct and detailed by using simple language and avoiding technical jargon when instructing participants on what was expected of them. While the last section deviated from this logic, since the term “Artificial Intelligence” was included in two questions, the term was defined at the beginning of the section so that participants could still answer effectively. Additionally, of the 15 questions put forward in the online survey, 13 were single-choice, one was multiple-choice, and one was open-ended. The single- and multiple-choice questions were structured in such a way as not to hint at the most appropriate answer(s).

The survey was circulated on social media platforms Facebook and LinkedIn once, as well as via e-mail. No costs were incurred during the circulation of the survey. After collecting 323 responses, the survey results were downloaded as quantitative data. The data was then translated into visual aids to graphically compare participants’ responses on their sources of information, trust towards these sources, verification measures performed, and awareness of machine-generated fake news, among others. These have been illustrated in Section 4.
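For illustration, the step from raw exported responses to comparison charts can be as simple as the following sketch; the file name and column name are hypothetical placeholders, not the actual survey fields:

```python
# Sketch: turn exported survey responses into a comparison chart.
# "survey_responses.csv" and "primary_news_source" are hypothetical names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_responses.csv")  # export from Google Forms

# Count how often each primary news source was selected.
counts = df["primary_news_source"].value_counts()

counts.plot(kind="bar", title="Participants' primary sources of news")
plt.ylabel("Number of respondents")
plt.tight_layout()
plt.savefig("primary_sources.png")
```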

The extensive literature review also facilitated the design of the interview questions. These were open-ended questions, with follow-up questions added to explore auxiliary concepts and principles brought up by interviewees where necessary. Similar to the survey, the interview questions followed a chronological structure, with the initial question setting the scene by asking how interviewees interpret and define fake news relative to their fields of knowledge. The other questions were based on technology and initiatives, and interviewees were also expected to answer in relation to their professional experience so as to elicit distinct and rich replies (refer to Appendix Section B: Interview questions).

Due to the specialist and expert nature of individuals required for the purpose of the interviews, the professional social networking platform LinkedIn was considered as an appropriate and adequate means of searching for and contacting those with the necessary skills and experience. Search terms included the following:

(1)

Job title—consisting of “Artificial Intelligence” as a constant as well as the inclusion of an array of fields of knowledge including “Marketing,” “Economics,” “Finance,” “Legal,” “IT Regulation,” and “Information Security” separately (“Artificial Intelligence” + one field of knowledge).

(2)

Location—this was set to the geographical location where the research was taking place and hence was set to “Malta.”

From the returned search results, the top five were selected (where applicable) and an invitation to participate in the research was sent out to these individuals on their LinkedIn profiles or through e-mail. The invitation explained the purpose of this study, their involvement, how the interviews would take place, and that their anonymity would be retained. A total of eight participants accepted the invitation to be interviewed, hailing from technology, marketing, legal, IT regulation, and information security backgrounds. The questions were sent to these participants before the interviews were held. The interviews were held online and recorded. Notes were also taken during the interviews to prompt participants to elaborate on any areas worth discussing.

After the interviews were held, the audio recordings were transcribed for analysis purposes. The recordings were permanently erased once all transcriptions were completed. The qualitative responses to every question discussed in the interviews were collated separately and manually studied. The conducted interviews also proved useful in comparing the obtained responses to the literature reviewed in Section 2. The interviews therefore served as an extension of the questionnaire, uncovering underlying insights that would not have surfaced through the questionnaire alone.

After careful examination of both the quantitative data obtained from the survey and the qualitative data obtained from the interviews, meaningful arguments emerged. These arguments formed the foundation of the research findings discussed in the subsequent section and ultimately helped shape the research contribution put forward in Section 5.


4 ANALYSIS AND INTERPRETATION

After being circulated on social media platforms Facebook and LinkedIn, as well as through e-mail, the online survey gathered a total of 323 responses. The resulting response rate was considered adequate and had exceeded initial expectations. With regard to interviews, the participants who accepted to be interviewed hailed from different fields of knowledge, with technology, particularly the field of AI, being a mutual one. The eight interviewed participants collectively showed a degree of eagerness to discuss the research topic and had both common and divergent insights to share on their experience with the subject matter. In order to reinforce the findings, the data collected was frequently compared to and put into context with other publicly available and reliable data published by other institutions tackling the same issue.

All participants acknowledged that the issue is a growing concern and, similarly, the validity of raising awareness of it. Since participants demonstrated an array of industry competences, they explained how the fake news issue has affected the dynamics of their industries. While they seemed to agree on the potency of the issue when leveraged through technology, the use of the latter as a countermeasure was viewed less favourably among participants. Ultimately, participants recognised that to curb this growing threat, man and machine must work in unison to achieve results, whereby one cannot succeed without the other.

4.1 Analysis

4.1.1 Online Survey Results.

Before analysing the results relevant to the research area, it is essential to give an idea of the demographic information collected from the participants (Figures 1–3). The next three figures give an indication of the age, education level, and occupational background of respondents. It is evident that the participant population was dominated by:

Young adults between the ages of 21 and 30

A majority of 79.9% that have completed tertiary education

Individuals who are employed full- or part-time

Figure 4 shows three different sources of news that respondents rely on as their primary source of information on current affairs. The most popular means was social media platforms, such as Facebook and X, followed by traditional media companies (news-broadcasting companies). A survey carried out in 2021 by the Pew Research Center in Washington, U.S., found that around 48% of American adults regularly obtain their news from social media, with Facebook surpassing all other social media sites [54]. In a similar fashion, research published by the European Commission Directorate-General for Communication indicates that 56% of respondents using social media networks obtain information through these channels, suggesting that social media has become one of the top sources for online information [12].

Fig. 1. Participant segmentation with respect to age.

Fig. 2. The highest education levels possessed by participants.

Fig. 3. Participants’ occupational status.

Fig. 4. Participants’ primary sources of news coverage.

Figure 5 portrays participants’ reason(s) for favouring one medium over another. It can be easily deduced that “convenience” and “ease of access,” as well as “timeliness,” were mutual factors sought by respondents who chose traditional media companies and social media as their primary sources of news. On the other hand, it can be noted that content accuracy and objectivity were two factors that were not wholly sought after by respondents. While it may be inconclusive from this chart whether ease of access and timeliness are two factors that make a particular medium and/or source popular among people, the consensus resonates greatly with The Guardian's editor-in-chief's claim that information accuracy is no longer sought by the public and hence no longer given priority [51]. Equivalently, a report published by the Reuters Institute in conjunction with the University of Oxford claims that individuals do not differentiate well between sources based on actual editorial practices. Instead, “stylistic and presentational” characteristics that make reading experiences more pleasing are given more value, even if the media is not trustworthy [47].

Fig. 5. Favoured characteristics of online information channels among participants.

The following two figures illustrate how frequently participants share information they come across online with others (family, friends, acquaintances, etc.) and from where that information is shared. While the pie chart in Figure 6 indicates that 41.2% of the sample population rarely share online information with others, it also implies that slightly below 44% of respondents often (“Frequently” and “Sometimes”) do share online information they come across. Additionally, Figure 7 sheds light on the sources from which participants usually share information. The bar chart clearly indicates that 60.1% of the time, people share content from social media rather than from the actual source (35.3% of the time). In line with what has been discussed on the accuracy of information on social media in previous sections, these findings may raise concerns about the legitimacy of the information being shared online. Data collected from the Flash Eurobarometer 464: Fake News and Disinformation Online suggests that 56% of the public consume content that has been shared by others on social media instead of going to the original source. However, the data from the said research is inconclusive in view of this study, as it does not indicate whether the information shared originated from social media or from the original source outside social media.

Fig. 6. Frequency of information sharing among participants.

Fig. 7. Online sources from which participants share information with others.

Global events similar to the coronavirus outbreak have reminded citizens of the value of independent journalism amidst the uncertainty that lies within the information stream. Nevertheless, online content continues to warrant considerable caution. Figure 8 lends credence to this argument, as 57.6% of respondents browse the information stream with caution, which has become increasingly significant in a digital media environment dominated by intermediaries, namely search engines and social media, to name a few. Heavy usage of these platforms, which provide negligible source context, can lead to drastic changes in audience behaviours and trust, as discussed in previous sections.

Fig. 8. How sceptical participants are when consuming online information.

In fact, Figure 9 indicates how pollution in the information stream has affected the public's trust towards certain media publishers and platforms, whereby slightly more than 50% of respondents reacted to the existence of fake news online by reconsidering their sources and switching to more reliable sources of information. On the other hand, 28.8% of respondents (counting both blue and yellow segments on the chart) are not outright concerned by the false information that their preferred media publishers and/or platforms generate. A report published by the Reuters Institute in conjunction with the University of Oxford greatly emphasises the impact of trust on democracies. Trustworthy sources and information are no longer a want but a need of paramount value to “consider perspectives outside of our own narrow personal experiences” when navigating the world [47]. The absence of trust can already cause unrest, let alone its misplacement.

Fig. 9. How the existence of illegitimate content has affected participants’ trust towards media publishers.

The stacked bar chart in Figure 10 depicts the number of participants who perform verification methods when coming across online content in contrast to those who do not. Additionally, it indicates the kinds of measures participants take in scrutinising the legitimacy of information. Cross-checking, or cross-referencing, with different publishers and platforms was the most frequent verification method used by participants, while a very small percentage of respondents, 6.2%, take the time to fact-check (research) the information they are exposed to. It is also worth mentioning that the blue segment at the base of each bar represents the number of participants who do not perform any verification measures, which accounts for 39% of the sample of respondents. This echoes concerns encountered in a study undertaken by the Stanford History Education Group, where individuals, particularly students, are not as cautious as they should be when searching for information on current affairs online [57].

Fig. 10. The measures performed by participants to verify content accuracy.

The graph in Figure 11 represents participants’ understanding of the effectiveness of AI techniques in both generating and detecting fake news content. In comparing the two, the results show that people consider AI to be highly effective in generating fake news rather than detecting it. While the yielded results may seem to indicate that the public still believes AI is capable of detecting fake news (“Effective”), it is crucial to note that 32.2% of respondents remain dubious of the latter (“Not sure”), instilling a sense of doubt and casting a shadow on the public's trust in AI's capabilities in this respect. Past literature has shown that AI models and prototypes work fairly well in detecting false information and/or predicting the reliability of sources with high accuracy under lab conditions. Despite the effectiveness of these algorithms, researchers are uncertain whether AI can solve the issue of misinformation but are convinced that it can help improve transparency. While the trust gap between the public and online information sources continues to widen, it would be “naïve to expect technology to solve the problem” [6]. Researchers are of the understanding that such an issue goes beyond technological mitigation and requires efforts from all relevant stakeholders.

Fig. 11. Participants’ response to the effectiveness of artificial intelligence in developing and detecting false information.
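The lab-style detectors referred to above are often simple supervised text classifiers at heart. The sketch below illustrates the general idea with a TF-IDF and logistic-regression pipeline; the four example headlines and their labels are invented for illustration, whereas published models are trained on large labelled corpora:

```python
# Sketch of a lab-style fake news detector: TF-IDF features plus a
# logistic-regression classifier. Training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Central bank raises interest rates by 25 basis points",  # legitimate
    "Parliament approves the annual national budget",         # legitimate
    "Miracle fruit cures all known diseases overnight",       # fake
    "Celebrity secretly replaced by identical clone",         # fake
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Scientists discover moon is hollow and made of metal"]))
```

Even such a toy model makes the limitation discussed above visible: accuracy depends entirely on the labelled data available, which is one reason researchers hesitate to call detection a solved problem.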

Consequently, Figure 12 indicates that participants are not convinced that current measures to reduce the generation, spread, and effects of misinformation are making enough of an impact. More than 90% of participants expressed the need for stronger and stricter efforts to curtail it. This result is indirectly reflected in the findings of the Flash Eurobarometer 464: Fake News and Disinformation Online, in which 83% of respondents consider that the existence of information that misrepresents reality, or is outright false, poses a threat to democracy [12], echoing principles discussed in previous sections.

Fig. 12. Participants’ responses when asked whether more initiatives against misinformation are required.

Figure 13 gives a graphical representation of the public's views on who should manage initiatives to counter fake news. While the highest numbers lie among the default choices given in the survey, namely “News media publishers” (29%), “Non-governmental organisations” (21.9%), “Governments” (23.3%), and “Big technology companies” (24%), a small yet equally notable number of replies pointed out that educational institutions, regulatory authorities, and the private sector should also contribute to curbing the issue. The findings illustrated in this figure are comparable to the data collected by the Flash Eurobarometer 464: Fake News and Disinformation Online, which indicates a large emphasis placed on journalists (45%) as well as press and broadcasting management companies (36%) to take action against this issue [12]. While these two were the most prominent actors, the research shows that national authorities, online social networks, European Union (EU) institutions, and non-governmental organisations (NGOs) have also been endorsed to act.

Fig. 13. Ownership of initiatives against misinformation.

As discussed in Section 2, the issue presents itself as a major challenge to nations, especially in the form of large-scale campaigns. The results obtained through Figure 13 and the European Commission Directorate-General for Communication imply the need for a coordinated response from the aforementioned actors. This resonates well with the set of suggestions put forward by the European Union through its “Action Plan Against Disinformation Factsheet” infographic, which aims to “build up capabilities and strengthen cooperation between Member States and EU institutions” in order to address the fake news issue [17]. In its action plan, the EU calls for the following:

The funding of adequate digital tools, skills, and specialised staff.

The setting up of real-time Rapid Alert Systems and facilitation of knowledge exchange of fake news campaigns.

Regular compliance checks of the private sector including big technology and social media companies.

Constant citizen awareness and resilience campaigns and the support of independent media and fact-checking organisations.

4.1.2 Evaluation of Interviews.

During the second phase of data collection, eight participants were interviewed on the subject matter. Participants’ personal and professional views highlighted a number of similarities and differences between their opinions. The examination of their views raised the following arguments.

(A)

Knowledge of the Phenomenon

Participants associate the issue of fake news with snippets of information that carry a delusive bias, thereby thwarting the truth. As discussed in previous sections, the intent can vary from social media satire to wide-scale usage by political parties, with potentially adverse effects including the manifestation of civil unrest, the toppling of democracies, and the tearing of the social fabric within democratic societies.

The issue has also expanded progressively within the marketing sector, with the most common concern relating to fake customer reviews on e-commerce websites such as Amazon and eBay. Participants employed in this industry claimed that, since a large number of customers heavily rely on product reviews before purchasing an item, customers would most likely be fooled and misled by fake reviews. This may also negatively affect the competitiveness of businesses in the long run.

Concerning terminology and articulation rather than the issue itself, participants in regulatory advisory roles emphasised that the term “fake news” is inappropriate, since the term “news” itself denotes verifiable information; coupling it with “fake” only creates a paradox. The term “disinformation,” implying the communication of incorrect information (misinformation) through a guilty mind (in law, mens rea6), would be better suited in this context. While individuals may define the phenomenon in one way or another, an adequate legal term and description for it has yet to be conceived. This makes it harder to identify what false information is, what it looks like, and how to describe it. In relation to this, certain participants made reference to a 2018 IMCO7 report stating that governments should not prohibit the dissemination of “alternative information,” as this impinges on articles 10 and 11 of the Human Rights Act relating to individuals’ freedom of speech and expression.

Over time, the issue has become a global concern, mostly because a high degree of online information is false. Where once such an issue was not given a lot of thought, today it has become a worry that automatically comes to mind when roaming the Internet. Participants recall cyberspace being a slightly more truthful place in the early days of the digital era. As the issue has been left to fester, information accuracy has continued to degrade.

(B)

AI Capabilities—For Better or Worse

The rapid weaponisation of AI, exploited heavily and at an unprecedented rate through social media networks, has steered society into a “click culture” spiral in which people's data is continuously harvested and monetised by large corporations, equating to great sums of advertising revenue. This is a struggle faced by most media companies supporting quality journalism, since their revenue models cannot compare to those deployed in misinformation campaigns.

It is increasingly evident that the information ecosystem we roam in has become more polluted over time. Participants expressed their concerns about the possibility of our current society evolving into one built on foundations of false information. What is even more concerning is state-managed misinformation, known as propaganda. A 2017 Harvard study found that the Chinese government coordinates an enormous number of hoaxes per year based on the FUD8 principle. Once the honest and transparent communication between a nation's government and its citizens is manipulated, the social pact between the two entities is tarnished, harming the crucial ethical layer of this fragile communication channel. Furthermore, long periods of false information consumption may ultimately lead individuals to exhibit signs of cognitive bias, thereby creating a more polarised society that filters out contrarian views.

Despite ever-growing evidence of AI causing harm, it must also be recognised for the numerous times it has helped humanity conquer demanding tasks while continuously expanding the borders of possibility. Image forensics and anomaly detection are two notable examples of AI's beneficial capabilities, both of which are derivatives of pattern recognition and greatly help in identifying abnormalities in fabricated text, images, and possibly videos. Through algorithmic observation, machines are able to predict the speed and spread of misinformation through network nodes. As with a pandemic, such a prediction can aid individuals in taking the necessary action, be it legal or technical, to halt and contain the dispersal. AI is also able to detect contextual dissonance, subtleties in automated texts and videos that are not so obvious to humans. Media that provides a divergent opinion to the truth is flagged by these pattern-recognising algorithms as contextually irrelevant, making it easier to segregate the fake from the real.
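The pandemic analogy above can be made concrete with a small diffusion simulation. The sketch below runs an independent-cascade-style process over a synthetic scale-free network using networkx; the graph, seed node, and per-edge share probability are all hypothetical:

```python
# Sketch: epidemic-style spread of a fake story over a synthetic social
# graph. All parameters (graph size, seed node, share probability) are
# hypothetical; real systems would fit these to observed sharing data.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=1000, m=3, seed=0)  # scale-free "social" network
p_share = 0.05                                     # chance a neighbour re-shares

active, seen = {0}, {0}                            # node 0 posts the story
step = 0
while active:
    newly_active = set()
    for u in active:
        for v in G.neighbors(u):
            if v not in seen and random.random() < p_share:
                newly_active.add(v)
    seen |= newly_active
    active = newly_active
    step += 1
    print(f"step {step}: {len(seen)} nodes reached")
```

Running such simulations repeatedly yields an estimate of how fast and how far a story is likely to travel, which is the kind of prediction participants described as useful for timely legal or technical containment.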

With all these concerns, participants still have mixed opinions on how close the world has come to fully automated false information. Participants are of the understanding that AI is only capable of mimicking and enhancing human capabilities to a certain extent. Since machines do not harness full consciousness, they cannot fully comprehend the context and emotions of the text they generate. However, “technology is meant to be pervasive,” and since it is propelled forward by a human stimulus, it will continue to acquire human ingenuity as it improves. While the future of AI capabilities is unknown, the possibility that they will significantly improve has already been established.

(C)

Synchronised Effort

Participants collectively recognised that the first step on the long road towards curbing fake news is the introduction of initiatives that are less focused on technology and more on ethics and trust. Participants suggested that a common ethical framework must be agreed upon in order to contribute to a society that values clarity, verified content, and, through that, confidence. Such a framework would incorporate the following reporting principles:

(i)

Accuracy shall remain the cardinal analytical principle in journalism.

(ii)

Journalists shall remain an independent voice and shall not act, formally or informally, on behalf of special interests.

(iii)

Fairness in reporting shall be maintained by weighing both sides of a story, taking an open-minded approach, and providing context.

(iv)

Confidential sources shall be protected.

(v)

Humanity shall be a guiding principle, avoiding further harm to already-disadvantaged groups and thus adopting a social-justice-oriented view.

(vi)

In cases of error, journalists shall remain accountable and must promptly explain the reasons for the error.

(vii)

In support of the preceding principles, transparency shall be upheld to allow for better and more accurate labelling.

Besides this, local media houses should consider associating themselves with international fact-checking institutions founded on values of quality journalism, similar to those discussed in previous sections. From an international perspective, respondents also noted that not everyone has access to quality journalism. With the latter increasingly becoming a necessity rather than a luxury, affordable quality journalism would bolster media literacy among the public. Participants further agreed that educational reform is necessary, not only to make the public aware of misinformation but also to teach essential life skills. From a young age, people would benefit from such a reform by learning and practising skills including searching for and verifying information, and ultimately discerning right from wrong. Such involvement from these entities is affirmed by the survey results procured for this research as well as by those of the European Commission Directorate-General for Communication [12].

Moving on to more regulatory matters, some participants pointed out that self-regulation by social media and technology companies has proved largely ineffective. Demand for stricter laws, such as the Digital Services Act and the Digital Markets Act, is on the rise as the fake news situation worsens. With Australia at the forefront of such legislation, the European countries Germany and France have also started regulating these companies, followed by the U.K., which has been setting up dedicated authorities for them.

(D)

Closing Arguments

Humans and machines have different capabilities, but the combination of the two results in a strong winning formula. In a generic scenario, data analysis would be performed by an algorithmic model, the review of that analysis would be performed by humans, and effective decision-making would follow through sound judgement. The beauty of these algorithms is that “improvement of the model occurs over time.” The data these models are fed ultimately guides them in identifying what is right and wrong, thus forming a conscience; and the more data they are fed, the higher their accuracy in detecting misinformation.
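As an illustration of this human-machine loop, the sketch below routes the model's low-confidence predictions to a human reviewer and feeds the confirmed labels back into an incrementally trained classifier. The classifier choice, the uncertainty threshold, and the toy data are assumptions made for the example, not the configuration of any system discussed in this study.

```python
# Illustrative human-in-the-loop pipeline: the model triages content, a human
# reviewer decides the uncertain cases, and each human decision is fed back
# to the model via incremental learning.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectoriser = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")   # probabilistic outputs enable triage

# Bootstrap on a tiny hand-labelled set (1 = misinformation, 0 = legitimate).
seed_texts = ["miracle cure suppressed by doctors", "council approves new budget"]
seed_labels = [1, 0]
model.partial_fit(vectoriser.transform(seed_texts), seed_labels, classes=[0, 1])

def triage(text, review_fn, threshold=0.35):
    """Auto-decide confident cases; route uncertain ones to a human reviewer."""
    X = vectoriser.transform([text])
    p_fake = model.predict_proba(X)[0][1]
    if abs(p_fake - 0.5) < threshold:    # model is unsure: ask a human
        label = review_fn(text)          # human judgement is the ground truth
        model.partial_fit(X, [label])    # feed the decision back to the model
        return label
    return int(p_fake > 0.5)

# In a real system review_fn would be a moderation queue; here it is a stub.
print(triage("shocking: celebrity endorses fake coin", review_fn=lambda t: 1))
```

The key design point is that the human stays in the loop exactly where the model is least certain, so each review both resolves a case and improves future accuracy.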

The findings obtained from this research (survey and interviews), together with similar data from other studies, suggest that no single solution can entirely circumvent misinformation. A multidisciplinary approach drawing on different bodies of power, entities, and expert individuals is imperative to take on such a threat. The involvement of the government, together with field specialists, is vital for new and improved legislation, in addition to a revised educational syllabus across all educational levels. However, the results indicate that governments alone will not bring about the required change. The private sector, NGOs, and academic and research institutions will also have a crucial role to play in gaining communities' trust and voicing their concerns, turning the matter into a shared responsibility.


5 RECOMMENDATIONS

Based on the research undertaken into the issue of fake news, how technology has fuelled it in today's modern world, and the multiple facets of society it has affected, the following recommendations, targeted at the Maltese jurisdiction, are proposed.

(A)

A National Legislation Consideration

The results collected clearly indicate that the government is one of the most influential entities able to herald the necessary countermeasures against the spread of fake news. It is therefore suggested that, after discussions with relevant specialists and experts willing to contribute to societal improvement, the amendment of an existing legislative bill, or the introduction of a new one, regarding the publishing of online content be considered. The intent is to encourage relevant national entities to set up guidelines on how online content can be expressed and shared. This would also align with Actions 1 and 4 of the EU's action plan against disinformation, mandated for all EU Member States [23].

(B)

The Setting Up of a Relevant Authority

In line with the EU's action plan, in particular Actions 7–9, which involve bolstering societal awareness and resilience, the government is also encouraged to set up an institution/entity with the aim of promoting awareness of the current misinformation landscape and supporting independent media and quality journalism. The Maltese Government has already founded the Malta Digital Innovation Authority9 (MDIA), which encourages technological developments across multiple sectors and consistently monitors such arrangements. This objective could be a potential extension to the MDIA's remit, providing support to citizens through the following:

(i)

Launch an anti-fake news campaign seeking to bolster the public's media literacy through audio and visual formats across a number of platforms.

(ii)

Issue a ranked list of online Maltese and foreign media outlets according to the degree of legitimacy and truthfulness of the content published.

(iii)

Formulate and draft a framework, aimed at online content publishers/businesses, to deliver guidance on how information should be presented online.

(iv)

Develop an AI-based moderation approach with crowd-sourced feedback loops, made up of selected individuals, that help the AI establish a baseline on whether the content in question is true or false (a minimal sketch of such a loop follows this list).
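The sketch below illustrates the crowd-sourced feedback loop in point (iv): selected reviewers vote on flagged items, and sufficient agreement becomes a baseline label the moderation model can train on. The quorum and agreement thresholds, and the choice to escalate split panels rather than use them for training, are illustrative assumptions.

```python
# Hypothetical aggregation step for a crowd-sourced moderation feedback loop.
from collections import Counter

def baseline_label(votes, quorum=5, agreement=0.8):
    """Aggregate reviewer votes ('true'/'false') into a training label.

    Returns 'true', 'false', or None when the panel is too small or too
    split, in which case the item is escalated instead of used for training."""
    if len(votes) < quorum:
        return None                       # not enough reviewers yet
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= agreement else None

# Six of seven reviewers judge the claim false: strong enough for a baseline.
print(baseline_label(["false"] * 6 + ["true"]))      # -> 'false'
# A 4-3 split is escalated rather than fed to the model.
print(baseline_label(["false"] * 4 + ["true"] * 3))  # -> None
```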

(C)

The Formulation of a Government ICT Policy Framework

Further to the relevant considerations in a national legislation construct, and corresponding to Actions 1 and 4 of the EU's action plan against disinformation, it is recommended that a Government of Malta Information Communication Technology (GMICT) policy10 framework, or similar, be correspondingly extended and encouraged across the respective industries. Such an approach and enabling framework would be championed by the government, with the help of relevant specialists, to formulate a detailed standard. The standard would aim to provide a deeper understanding of how content should be presented online by businesses so that truthful information is always projected in cyberspace. Enforcement of this policy framework could be taken up by the new authority mentioned in the previous point.

The collated data strongly indicates that the issue of fake news remains highly relevant. With the help of this data, governance recommendations were produced in the form of a sound proposal to be taken up and enforced by the relevant authorities and entities at the national administration level. This, however, does not mean that the public sector should be held solely responsible for the implementation. Other parties, including the private sector and various NGOs, should be equally driven to tackle the issue, whether through governance, AI-based approaches, or other means.

While it is acknowledged that the data collected through primary research could be larger in the context of a global issue, the analysed data can be considered a sound baseline for this set of cautionary observations. This study serves as a stepping stone towards tackling misinformation from a governance perspective. Wider-scale research can provide further insight and increasingly significant data, making room for more granular analyses and recommendations.


6 CONCLUSIONS

6.1 Research Conclusions

The disruptive impact of specific technology on already distorted channels of information exchange formed the basis of this study. The results collected confirm that fake news remains a major issue that needs to be tackled on a global scale.

This extensive study was intended to examine the influence of technology on the already-existing issue of fake news and the extent of its impact on society. Preliminary research on previous literature made it apparent that the issue had been researched well and that the involvement of technological advancement was arguably not a new concept. The literature provides sound academic insight into how these challenges have influenced the way people think and the repercussions affecting industries. However, the literature was found to be heavily technology-centric, since the recommendations proposed primarily involved classes of algorithmic models to detect fake news, which frequently depend on significant computing power and large quantities of data to produce effective outcomes.

The data gathered indicates that people have become more conscious of the behavioural challenges brought about by false content, especially through technology's influence. As a result, people have become more suspicious and sceptical of online content, creating trust issues between consumers and online media publishers, amongst others. Despite this, the data collected, as well as accompanying data from external research studies, expresses significant concern about the popularity of social media platforms, bolstered by their ease of access, convenience, and the speed at which information is published. Additionally, the research clearly identifies a growing worry about the younger generation, as they are less likely to verify the media they are exposed to.

In tandem, research has shown that awareness of machine learning models purposefully deployed to create and spread misinformation is also on the rise. Despite awareness efforts, research in the field of machine-generated text was found to be limited, suggesting that the challenge is still relatively in its early stages but maturing. Nonetheless, awareness alone will not fully protect consumers from false information, particularly as a decline in trust towards machine learning's capability to detect machine-generated content was noted throughout the findings.
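One simple family of detectors in this space scores how statistically "expected" a passage is under a language model, since fluent machine-generated text tends to exhibit unusually low perplexity. The sketch below shows this generic heuristic; it is not a method proposed by this study, the GPT-2 checkpoint is an arbitrary choice, and any decision threshold would be an assumption requiring calibration against human-written text.

```python
# A generic perplexity heuristic for spotting machine-generated text,
# included purely as an illustration of one detection approach.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more predictable to the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels yields the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Very low perplexity relative to a corpus of human-written news can hint at
# machine generation, though the heuristic is easy to evade and produces
# false positives; it should support, not replace, human review.
print(perplexity("The council met on Tuesday to discuss the annual budget."))
```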

Research has indicated that fake news is a diverse problem with multiple tiers, complications, and outcomes, especially with the notable rise of machine-generated fake news. In fact, the study has shown that there is not yet one clear understanding of fake news, making it tedious to map out all the possible impacts of the problem on society. Despite initiatives increasing over time, a coherent action plan has not yet been established and enforced by the highest authorities, since the problem has not been clearly defined. Efforts to diminish the reach of fake news, from creation to diffusion to exposure, are not as effective as one would hope, and without the necessary resources and considerable backing, such efforts will continue to dwindle.

By evaluating the data collected from both primary research techniques, together with the material collected from previous literature and findings from external research, a set of governance-based recommendations aimed at the Maltese Government was put forward. These recommendations are believed to be of value to other jurisdictions as well, provided that enough contextual support is in place alongside the necessary implementation resources. The recommendations should be used by the respective decision-makers as a starting point and explored further for societal improvement. While several initiatives have been put into action worldwide, with only partially established outcomes, the automated curbing of fake news is still in its infancy.

Considering the scale of this study, the data compiled and the deductions presented contribute significantly to the body of knowledge on misinformation. While many questions have yet to be answered, a substantial and meaningful contribution to a crucial area has been produced, lending credence to solid policy interventions.

6.2 Future Work

The research focused on a specific issue that has ingrained itself in everyday life, and the chosen methodology enabled the study to be undertaken successfully in this regard. Although the outcome is academically insightful, there is ample opportunity for further investigative and experimental methods and techniques to introduce additional views, discussions, and outcomes. The research is exploratory in nature and investigates a concept on a broad scale; it can thus be considered both a foundation and an opportunity for further examination.

An interesting development of this study would be to repeat it with a wider audience and on a larger scale. This would allow for a more diverse dataset and greatly enrich the data analysis. Additionally, it would show how significant an issue misinformation is and how technology has revolutionised it in various parts of the world.

OpenAI's revolutionary chatbot, ChatGPT, has made a significant impact on the digital world and provides striking examples of fluently written machine-generated text. While it is being used for a variety of cases, from writing academic essays to computer code, it is vital to keep in mind that its answers may not always be correct. Researching this model may help us understand the level of perception it possesses and determine whether it can distinguish between fact and forms of fiction, such as satire. Evaluating the accuracy of the information it produces would be sound research to determine its strengths, weaknesses, and hence its limitations.
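A minimal sketch of such an evaluation harness is given below. The ask_model function is a hypothetical stand-in for whichever chatbot interface is under test, and the benchmark items and exact-match scoring are deliberate simplifications; a real study would use a larger, vetted question set and more robust answer matching.

```python
# Sketch of an evaluation harness for the future work described above:
# pose questions with known reference answers and score the model's replies.
def ask_model(question: str) -> str:
    """Hypothetical stand-in: replace with a call to the chatbot under test."""
    return "I believe the answer is 1989."  # canned reply for demonstration

BENCHMARK = [
    # (question, reference answer, category): categories let accuracy be
    # broken down into fact items vs. satire-recognition items, as proposed.
    ("What year did the Berlin Wall fall?", "1989", "fact"),
    ("Is The Onion a factual news outlet?", "no", "satire"),
]

def evaluate(items):
    per_category = {}
    for question, reference, category in items:
        reply = ask_model(question).strip().lower()
        hit = reference.lower() in reply   # crude containment match
        hits, total = per_category.get(category, (0, 0))
        per_category[category] = (hits + hit, total + 1)
    return {cat: hits / total for cat, (hits, total) in per_category.items()}

print(evaluate(BENCHMARK))  # e.g. {'fact': 1.0, 'satire': 0.0}
```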

With regard to the repercussions of fake news, the perspectives raised in past literature indicate an opportunity for further study. Supplementary investigation of the challenges faced by businesses across all sectors would greatly assist in grasping the severity of the issue's impact on the economy. The research can be extended to investigate other case studies, possibly similar to those discussed in this study. Businesses may subsequently be able to identify why and how they were, or are, targeted by misinformation and how flaws within their routine operations were used against them and their consumers. The emergent patterns of targeting and exploitation could then inform a remediation plan or framework to ensure that businesses do not succumb to the ramifications of intentionally malicious online misinformation.

In evaluating the rationale behind the belief in false truths and, as a result, the change in behaviour of individuals exposed to fake news, the psychological dimension is considered imperative. While this research has considered this perspective from a distance, further inclusion of psychology-based literature, such as the role of the mind and memory in the process of content interpretation, would assist in uncovering other causes of this modern phenomenon. Furthermore, the findings of such a study may shed light on what would be required to develop technologies designed with fake-news-relevant psychological principles in mind [34].

APPENDIX

A ONLINE SURVEY QUESTIONNAIRE

A.1  Demographics—participants’ demographic characteristics

(1)

State your age

(a)

Under 20

(b)

21–30

(c)

31–40

(d)

41–60

(e)

61+

(2)

What is your highest level of education?

(a)

Primary Education

(b)

Secondary Education

(c)

Bachelors

(d)

Masters

(e)

Doctorate

(f)

Other

(3)

State your current occupation

(a)

Unemployed

(b)

Student

(c)

Full- or part-time worker

B.1  Information source—the sources of information from which participants obtain news

(1)

Which is your primary source for online news coverage?

(a)

Traditional media companies

(b)

Social media

(c)

Online blogs

(2)

Give a reason for your answer to the previous question by selecting all that apply.

(a)

Timely

(b)

Convenient and easily accessible

(c)

Trustworthy and reliable

(d)

Accurate

(e)

Prominence/Reputational standing

(f)

Clear and concise

(g)

Balanced and objective

(h)

Other

(3)

Do you rely on social media platforms, such as Facebook and X, for news coverage?

(a)

Yes, it is my primary option when I search for news stories.

(b)

Yes, but not as my primary option when I search for news stories.

(c)

No, I do not make use of social media when I search for news stories.

(4)

How often do you share news stories with your family, friends, and acquaintances on social media?

(a)

Daily

(b)

Most times a week

(c)

Once or twice a week

(d)

Rarely

(e)

Never

(5)

On average, from where do you usually share news stories?

(a)

Social media

(b)

Original source

(c)

Other

C.1  Verification measures—measures taken by participants to verify content

(1)

In general when browsing online, how sceptical are you of the information you come across?

(a)

Confident

(b)

Neutral

(c)

Cautious

(2)

Has content illegitimacy affected your trust towards certain media publishers and platforms?

(a)

Yes, but I still visit the same sources.

(b)

Yes, and I now visit different sources.

(c)

Not sure.

(d)

No, I still visit the same sources.

(3)

State what kind of measures you take to verify content accuracy, if any. (non-mandatory question)

D.1  Countermeasures—participants’ opinions on diminishing the issue and its effects

(1)

How effective do you think technological methods, such as artificial intelligence, would be at developing convincing false information?

This question relates to artificial intelligence. Keep in mind that artificial intelligence is a machine's ability to mimic human cognitive capabilities such as perceiving, learning, reasoning, and solving problems.

(a)

Strongly effective

(b)

Effective

(c)

Not sure

(d)

Ineffective

(e)

Strongly ineffective

(2)

How effective do you think technological methods, such as artificial intelligence, would be at identifying, and possibly correcting, false information?

This question relates to artificial intelligence. Keep in mind that artificial intelligence is a machine's ability to mimic human cognitive capabilities such as perceiving, learning, reasoning, and solving problems.

(a)

Strongly effective

(b)

Effective

(c)

Not sure

(d)

Ineffective

(e)

Strongly ineffective

(3)

Do you think there is a need for more initiatives to take action against the issue of fake news?

(a)

Yes, more initiatives are required to tackle the issue.

(b)

Not sure.

(c)

No, there is no need for more initiatives on this front.

(4)

If you agree that more initiatives are required, who do you think should manage these initiatives? Select all that apply.

(a)

Non-governmental organisations (NGOs)

(b)

News media publishers

(c)

Governments

(d)

Big technology companies

(e)

Other

B INTERVIEW QUESTIONS

(1)

How do you define fake news? Why does how it is perceived matter?

(a) Follow up: With respect to your personal and professional experience, how does it affect us in our everyday lives?

(2)

What is your view on misinformation that is generated by machines?

(a) Follow up: Would this change the way we perceive the effects of fake news, and if so, how?

(b) Follow up: Will artificial intelligence become better at developing believable, human-like falsehoods?

(3)

Should technology be involved in the curbing of misinformation?

(a) Follow up: Do you believe machine learning algorithms would be effective in detecting fake news generated by other algorithms?

(b) Follow up: Would you consider the involvement of technology to improve or worsen the situation?

(4)

With what we've seen in recent years on this issue, including its outcomes, do you think there is a great need for more initiatives and solutions to curtail the issue?

(a) Follow up: What nature and/or form should these initiatives and solutions take?

(b) Follow up: Who should take ownership of, develop, and implement these initiatives and solutions?

Footnotes

1. A closed (digital) environment where the same ideas and beliefs are repeated and reinforced without criticism [2, 13, 53, 61].

2. An AI research and deployment company, located in San Francisco, California, working to ensure that artificial intelligence benefits all of humanity [36].

3. The attempt of inflating stock prices through false, misleading, or greatly exaggerated recommendations with the aim of selling already owned stocks at a higher price than originally bought [11].

4. Launched in 2015 by the Poynter Institute, the IFCN sets a code of ethics and issues certificates to publishing organisations that pass fact-checking compliance audits [3].

5. A bot refers to a software application that runs automated tasks, known as scripts, to mimic human activity on the Internet on a large scale [15].

6. “Mens rea” is the Law Latin (L.L.) term for “the guilty mind” and refers to the mental element of an individual's intention to commit a crime. It can also refer to knowledge that one's action or negligence would lead to a crime [27].

7. The IMCO Committee is a European committee responsible for overseeing and scrutinising the EU rules with respect to the single market from a legislative perspective [7].

8. Fear, uncertainty, and doubt (FUD) is a propaganda tactic used in a variety of sectors including sales, marketing, public relations, and politics. It is used to influence people's perception through dubious or false information with the intent of manifesting fear [40].

9. The MDIA is a Maltese governmental organisation that promotes all governmental policies relating to technological innovation excellence while enforcing corresponding compliance standards.

10. GMICT policies are guidelines relevant to all Public Administration and are modified according to evolving business needs to work as a consistent standard [29].

REFERENCES

[1] Ahmed H., Traore I., and Saad S. 2017. Detection of online fake news using N-Gram analysis and machine learning techniques. Lecture Notes in Computer Science, Vol. 10618. Springer, 127–138.
[2] Allcott H. and Gentzkow M. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives 31, 2 (2017), 211–236.
[3] Ananth V. 2019. Can Fact-Checking Emerge as Big and Viable Business? The Economic Times. https://economictimes.indiatimes.com/tech/internet/can-fact-checking-emerge-as-big-and-viablebusiness/articleshow/69210719.cms
[4] BBC News. 2020. Bradford MPs Warn of “Worrying Rise” in Mobile Phone Mast Attacks. https://www.bbc.com/news/uk-england-leeds-55423760
[5] Boyce C. and Neale P. 2006. Conducting In-Depth Interviews: A Guide for Designing and Conducting In-Depth Interviews. Pathfinder International. https://d1wqtxts1xzle7.cloudfront.net/33661461/m_e_tool_series_indepth_interviews-libre.pdf
[6] Cassauwers T. 2019. Can artificial intelligence help end fake news? European Commission. https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-artificial-intelligence-help-end-fake-news
[7] Cavazzini A. 2020. IMCO - About Us. https://www.europarl.europa.eu/committees/en/imco/about
[8] CHEQ and Cavazos R. 2019. The economic cost of bad actors on the internet - fake influencer marketing in 2019. Journal of Advertising 43, 2 (2019).
[9] Dan V. 2018. Empirical and Non-Empirical Methods. https://www.ls1.ifkw.uni-muenchen.de/personen/wiss_ma/dan_viorela/empirical_and_non_empirical.pdf
[10] Dans E. 2020. Twitter: Reading Beyond the Headlines. https://www.forbes.com/sites/enriquedans/2020/09/27/twitter-reading-beyond-the-headlines/
[11] Dhir R. 2019. Pump-and-Dump: Definition, How the Scheme is Illegal, and Types. https://www.investopedia.com/terms/p/pumpanddump.asp
[12] Directorate-General for Communication. 2018. Flash Eurobarometer 464: Fake News and Disinformation Online. European Commission. https://data.europa.eu/data/datasets/s2183_464_eng?locale=en
[13] di Domenico G. and Visentin M. 2020. Fake news or true lies? Reflections about problematic contents in marketing. International Journal of Market Research 62, 4 (2020), 409–417.
[14] Donovan J. 2020. Protest misinformation is riding on the success of pandemic hoaxes. MIT Technology Review. https://www.technologyreview.com/2020/06/10/1002934/protest-propaganda-is-riding-on-the-success-of-pandemic-hoaxes/
[15] Dunham K. and Melnick J. 2008. Malicious Bots: An Inside Look into the Cyber-Criminal Underground of the Internet. Auerbach Publications. https://doi.org/10.1201/9781420069068
[16] East StratCom Task Force. 2021. EU vs. Disinformation - About. EUvsDisinfo. https://euvsdisinfo.eu/about/
[17] European Commission. 2018. Factsheet – Action Plan against Disinformation. https://digital-strategy.ec.europa.eu/en/library/factsheet-action-plan-against-disinformation
[18] Fleming C. M. and Bowden M. 2009. Web-based surveys as an alternative to traditional mail methods. Journal of Environmental Management 90, 1 (2009), 284–292.
[19] Frampton B. 2015. Clickbait: The changing face of online journalism. BBC News. https://www.bbc.com/news/uk-wales-34213693
[20] Fulgoni G. M. and Lipsman A. 2017. The downside of digital word of mouth and the pursuit of media quality. Journal of Advertising Research 57, 2 (2017), 127–131. https://www.journalofadvertisingresearch.com/content/57/2/127
[21] Hardalov M., Koychev I., and Nakov P. 2016. In search of credible news. In International Conference on Artificial Intelligence: Methodology, Systems, and Applications. 172–180.
[22] Hasher L., Goldstein D., and Toppino T. 1977. Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior 16, 1 (1977), 107–112.
[23] High Representative of the Union for Foreign Affairs and Security Policy. 2018. Action Plan against Disinformation. https://www.eeas.europa.eu/sites/default/files/action_plan_against_disinformation.pdf
[24] Jawahar G., Abdul-Mageed M., and Lakshmanan L. V. S. 2020. Automatic detection of machine generated text: A critical survey. In The 28th International Conference on Computational Linguistics (COLING). http://arxiv.org/abs/2011.01314
[25] Knight W. 2018. Fake news 2.0: Personalized, optimized, and even harder to stop. MIT Technology Review. https://www.technologyreview.com/2018/03/27/3116/fake-news-20-personalized-optimized-and-even-harder-to-stop/
[26] Lazer D. M. J., Baum M. A., Benkler Y., Berinsky A. J., Greenhill K. M., Menczer F., Metzger M. J., Nyhan B., Pennycook G., Rothschild D., Schudson M., Sloman S. A., Sunstein C. R., Thorson E. A., Watts D. J., and Zittrain J. L. 2018. The science of fake news. Science 359, 6380 (2018), 1094–1096.
[27] Legal Information Institute. 2021. Mens Rea. Cornell Law School. https://www.law.cornell.edu/wex/mens_rea
[28] Lima V. 2020. Sustainable citizenship and the prospect of participation and governance in the digital era. In Governance, Development, and Social Inclusion in Latin America. Springer, 99–115.
[29] Malta Information Technology Authority (MITA). 2021. ICT Policy & Strategy: GMICT Policies. https://mita.gov.mt/portfolio/ict-policy-and-strategy/gmict-policies/
[30] McGuffie K. and Newhouse A. 2020. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. http://arxiv.org/abs/2009.06807
[31] Members' Research Service. 2017. Disinformation, 'fake news' and the EU's response. European Parliamentary Research Service. https://epthinktank.eu/2017/11/20/disinformation-fake-news-and-the-eus-response/
[32] Mills A. J., Pitt C., and Ferguson S. L. 2019. The relationship between fake news and advertising: Brand management in the era of programmatic advertising and prolific falsehood. Journal of Advertising Research 59, 1 (2019), 3–8. https://www.researchgate.net/publication/331538082_The_Relationship_between_Fake_News_And_Advertising_Brand_Management_in_the_Era_Of_Programmatic_Advertising_and_Prolific_Falsehood
[33] Mocanu D., Rossi L., Zhang Q., Karsai M., and Quattrociocchi W. 2014. Collective Attention in the Age of (Mis)information. http://arxiv.org/abs/1403.3344
[34] Nuccitelli D. 2017. Fake News is a Threat to Humanity, but Scientists May Have a Solution. The Guardian. https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/dec/27/fake-news-is-a-threat-to-humanity-but-scientists-may-have-a-solution
[35] Okoro E. M., Abara B. A., Umagba A. O., Ajonye A. A., and Isa Z. S. 2018. A hybrid approach to fake news detection on social media. Nigerian Journal of Technology 37, 2 (2018), 454–462.
[36] OpenAI. 2015. OpenAI: About. https://openai.com/about
[37] Orlowski J. 2020. The Social Dilemma. Netflix.
[38] Pariser E. 2011. The Filter Bubble: What the Internet is Hiding From You. Penguin Books Limited.
[39] Petcu B. 2018. Fake news and financial markets: A 21st century twist on market manipulation. American University Business Law Review 7, 2 (2018), 297–326. https://ssrn.com/abstract=3233069
[40] Pfaffenberger B. 2000. The rhetoric of dread: Fear, uncertainty, and doubt (FUD) in information technology marketing. Knowledge, Technology & Policy 8, 2 (2000), 245–250.
[41] Polage D. C. 2012. Making up history: False memories of fake news stories. Europe's Journal of Psychology 8, 2 (2012), 245–250.
[42] Reynolds E. 2018. Why Our Brains Love Fake News—and How We Can Resist It. New York University. https://www.nyu.edu/about/news-publications/news/2018/june/jay-van-bavel-on-fake-news.html
[43] Rubin V., Conroy N., Chen Y., and Cornwell S. 2016. Fake news or truth? Using satirical cues to detect potentially misleading news. In NAACL-CADD2016: Workshop on Computational Approaches to Deception Detection at the 15th Annual Conference of the North American Chapter. 7–17.
[44] Saunders M., Lewis P., and Thornhill A. 2016. Research Methods for Business Students (7th ed.). Pearson Education Limited.
[45] Tandoc E. C., Lim Z. W., and Ling R. 2017. Defining fake news: A typology of scholarly definitions. Digital Journalism 6, 2 (2017), 137–153.
[46] The Government of Singapore. 2020. Factually: Debunking Misinformation and Disinformation. The Singapore Government Agency Website. https://www.gov.sg/factually
[47] The Media Reform Centre. 2021. Stopfake.org - About Us. https://www.stopfake.org/ru/o-nas/
[48] Toff B., Badrinathan S., Mont'Alverne C., Arguedas A. R., Fletcher R., and Nielsen R. K. 2021. Listening to What Trust in News Means to Users: Qualitative Evidence from Four Countries. https://reutersinstitute.politics.ox.ac.uk/listening-what-trust-news-means-users-qualitative-evidence-four-countries#header–21
[49] Toff B., Badrinathan S., Mont'Alverne C., Arguedas A. R., Fletcher R., and Nielsen R. K. 2021. Overcoming Indifference: What Attitudes Towards News Tell Us About Building Trust. https://reutersinstitute.politics.ox.ac.uk/overcoming-indifference-what-attitudes-towards-news-tell-us-about-building-trust#header–0
[50] Vasu N., Ang B., Jayakumar S., Faizal M., and Ahuja J. 2018. Fake News: National Security in the Post-Truth Era. https://www.rsis.edu.sg/wp-content/uploads/2018/01/PR180313_Fake-News_WEB.pdf
[51] Viner K. 2016. How technology disrupted the truth. The Guardian. https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth
[52] Visentin M., Pizzi G., and Pichierri M. 2019. Fake news, real problems for brands: The impact of content truthfulness and source credibility on consumers' behavioral intentions toward the advertised brands. Journal of Interactive Marketing 45 (2019), 99–112. https://www.sciencedirect.com/science/article/abs/pii/S1094996818300525
[53] Waldrop M. M. 2017. News feature: The genuine problem of fake news. Proceedings of the National Academy of Sciences, 12631–12634. https://www.researchgate.net/publication/321113489_News_Feature_The_genuine_problem_of_fake_news
[54] Walker M. and Matsa E. K. 2021. News Consumption Across Social Media in 2021. Pew Research Center. https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021/
[55] Waterson J. and Hern A. 2020. At least 20 UK phone masts vandalised over false 5G coronavirus claims. The Guardian. https://www.theguardian.com/technology/2020/apr/06/at-least-20-uk-phone-masts-vandalised-over-false-5g-coronavirus-claims
[56] Watt S.-M. 2018. The Effects of Fake News on the Practice of Investor Relations and Financial Communications. https://digitallibrary.usc.edu/CS.aspx?VP3=DamView&VBID=2A3BXZ8XQJR3K&SMLS=1&RW=910&RH=956#/DamView&VBID=2A3BXZ8XQJHO7&PN=1&WS=SearchResults
[57] Wineburg S., McGrew S., Breakstone J., and Ortega T. 2016. Evaluating Information: The Cornerstone of Civic Online Reasoning. https://purl.stanford.edu/fv751yt5934
[58] Woollaston-Webber V. 2020. Facebook shuts down thousands of UK accounts in clamp down on fake news. Wired. https://www.wired.co.uk/article/facebook-fake-news
[59] Woolley S. 2020. We're fighting fake news AI bots by using more AI. That's a mistake. MIT Technology Review. https://www.technologyreview.com/2020/01/08/130983/were-fighting-fake-news-ai-bots-by-using-more-ai-thats-a-mistake/
[60] Zhou V. 2016. How China's highly censored WeChat and Weibo fight fake news and other controversial content. South China Morning Post. https://www.scmp.com/news/china/policies-politics/article/2055179/how-chinas-highly-censored-wechat-and-weibo-fight-fake
[61] Zhuk D., Tretiakov A., Gordeichuk A., and Puchkovskaia A. 2018. Methods to identify fake news in social media using artificial intelligence technologies. In International Conference on Digital Transformation and Global Society. 858, 446–454.
