research-article
Public Access

"Defaulting to boilerplate answers, they didn't engage in a genuine conversation": Dimensions of Transparency Design in Creator Moderation

Published: 16 April 2023

Abstract

Transparency matters a great deal to people who experience moderation on online platforms; much CSCW research has viewed offering explanations as one of the primary ways to enhance moderation transparency. However, relatively little attention has been paid to unpacking what transparency entails in moderation design, especially for content creators. We interviewed 28 YouTubers to understand their moderation experiences and analyze the dimensions of moderation transparency. We identified four primary dimensions: participants desired the moderation system to present moderation decisions saliently, explain the decisions profoundly, afford communication with the users effectively, and offer repairment and learning opportunities. We discuss how these four dimensions are mutually constitutive and conditioned in the context of creator moderation, where the target of governance mechanisms extends beyond the content to creator careers. We then elaborate on how a dynamic transparency perspective could value content creators' digital labor, how transparency design could support creators' learning, as well as implications for transparency design on other creator platforms.



• Published in

  Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1 (April 2023), 3836 pages
  EISSN: 2573-0142
  DOI: 10.1145/3593053

      Copyright © 2023 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States
