DOI: 10.1145/3292522.3326012
Trust It or Not: Effects of Machine-Learning Warnings in Helping Individuals Mitigate Misinformation

Published: 26 June 2019

Abstract

Despite increased interest in the study of fake news, how to aid users' decisions in handling suspicious or false information is not well understood. To better understand the impact of warnings on individuals' fake-news decisions, we conducted two online experiments, each evaluating the effect of three warnings (one Fact-Checking and two Machine-Learning based) against a control condition. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news improved when the Fact-Checking warning, but not the two Machine-Learning warnings, was presented with fake news. Post-session questionnaire results revealed that participants trusted the Fact-Checking warning more. In Experiment 2, we proposed a Machine-Learning-Graph warning that presents the detailed results of machine-learning based detection, and we removed the source from each news headline to test its impact on individuals' fake-news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real news. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Our results therefore indicate that a transparent machine-learning warning is critical to improving individuals' fake-news detection but does not necessarily increase users' trust in the model.
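The abstract reports warnings' effects on participants' "sensitivity" in differentiating fake from real news. The paper's exact computation is not reproduced here, but in signal-detection terms (the standard framework for such detection studies), sensitivity is d' = z(hit rate) − z(false-alarm rate). A minimal sketch, where the counts and the log-linear correction for extreme proportions are illustrative assumptions, not the authors' reported data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each count (log-linear correction) so hit or false-alarm
    rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: correctly flags 18 of 24 fake headlines (hits)
# and wrongly flags 6 of 24 real headlines (false alarms).
print(round(d_prime(18, 6, 6, 18), 2))  # → 1.29, above-chance discrimination
```

A higher d' means better discrimination between fake and real items regardless of response bias; d' = 0 corresponds to chance performance.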



Published In

WebSci '19: Proceedings of the 10th ACM Conference on Web Science
June 2019, 395 pages
ISBN: 9781450362023
DOI: 10.1145/3292522

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. algorithm transparency
2. decision aid
3. misinformation
4. warnings


Conference

WebSci '19: 11th ACM Conference on Web Science
June 30 - July 3, 2019
Boston, Massachusetts, USA

Acceptance Rates

WebSci '19 Paper Acceptance Rate: 41 of 130 submissions, 32%
Overall Acceptance Rate: 245 of 933 submissions, 26%



