VoxPop: An Experimental Social Media Platform for Calibrated (Mis)information Discourse

Published: 27 December 2021

Abstract

VoxPop, short for Vox Populi, is an experimental social media platform that has neither an absolute “truth-keeping” mission nor an uncontrolled “free-speaking” vision. Instead, it allows discourse that naturally includes (mis)information to be contextualized among users with the aid of UX design and data science affordances and frictions. VoxPop introduces calibration metrics, namely a Faithfulness-To-Known-Facts (FTKF) score associated with each post and a Cumulative FTKF (C-FTKF) score associated with each user, appealing to self-regulated participation through sociocognitive signals. The goal of VoxPop is not to become an ideal platform—that is impossible—but to draw attention to an adaptive approach to dealing with (mis)information, one rooted in social calibration rather than in imposing, or avoiding altogether, punitive moderation.
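To make the two calibration metrics concrete, below is a minimal sketch of how a per-post FTKF score and a per-user Cumulative FTKF (C-FTKF) score might relate. The abstract does not give the scoring model or the aggregation rule, so the Post and User classes, the [0, 1] scale, and the mean-based aggregation are illustrative assumptions only, not VoxPop's actual design.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical sketch only: the paper defines how FTKF is computed and how
# C-FTKF accumulates; the [0, 1] scale and the mean aggregation here are
# illustrative assumptions, not VoxPop's actual model.

@dataclass
class Post:
    text: str
    ftkf: float  # Faithfulness-To-Known-Facts score for this post (assumed 0..1)

@dataclass
class User:
    handle: str
    posts: list[Post] = field(default_factory=list)

    def c_ftkf(self) -> float:
        """Cumulative FTKF: assumed here to be the mean of the user's post scores."""
        return mean(p.ftkf for p in self.posts) if self.posts else 0.0

# A user whose posts drift from known facts sees their C-FTKF drop, which other
# users can read as a sociocognitive signal rather than a moderation verdict.
alice = User("alice")
alice.posts.append(Post("Vaccines are available at local pharmacies.", ftkf=0.9))
alice.posts.append(Post("The outbreak was caused by 5G towers.", ftkf=0.1))
print(f"{alice.handle}: C-FTKF = {alice.c_ftkf():.2f}")
```

In the platform as described, such scores are surfaced as calibration signals in the UX to support self-regulated participation, rather than being used for punitive moderation.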


Information

Published In

NSPW '21: Proceedings of the 2021 New Security Paradigms Workshop
October 2021
122 pages
ISBN:9781450385732
DOI:10.1145/3498891
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 December 2021


Author Tags

  1. emergent moderation
  2. inclusive usable security
  3. misinformation
  4. social media platform
  5. user incentive analysis

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

NSPW '21
NSPW '21: New Security Paradigms Workshop
October 25 - 28, 2021
Virtual Event, USA

Acceptance Rates

Overall Acceptance Rate 98 of 265 submissions, 37%


Article Metrics

  • Downloads (Last 12 months): 41
  • Downloads (Last 6 weeks): 2
Reflects downloads up to 13 Feb 2025

Cited By

  • (2024) Discerning Individual Preferences for Identifying and Flagging Misinformation on Social Media. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 110-119. https://doi.org/10.1145/3627043.3659545. Online publication date: 22-Jun-2024.
  • (2024) Investigating the Mechanisms by which Prevalent Online Community Behaviors Influence Responses to Misinformation: Do Perceived Norms Really Act as a Mediator? Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. https://doi.org/10.1145/3613904.3641939. Online publication date: 11-May-2024.
  • (2023) Fight fire with fire. Proceedings of the Nineteenth USENIX Conference on Usable Privacy and Security, 19-36. https://doi.org/10.5555/3632186.3632188. Online publication date: 7-Aug-2023.
  • (2023) “I Just Didn’t Notice It:” Experiences with Misinformation Warnings on Social Media amongst Users Who Are Low Vision or Blind. Proceedings of the 2023 New Security Paradigms Workshop, 17-33. https://doi.org/10.1145/3633500.3633502. Online publication date: 18-Sep-2023.
  • (2022) Meaningful Context, a Red Flag, or Both? Preferences for Enhanced Misinformation Warnings Among US Twitter Users. Proceedings of the 2022 European Symposium on Usable Security, 189-201. https://doi.org/10.1145/3549015.3555671. Online publication date: 29-Sep-2022.
