Abstract
How social media platforms can conduct content moderation fairly is gaining attention from society at large. HCI and CSCW researchers have investigated which factors affect whether users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking how users form perceptions of (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways in which participants assessed moderation fairness: equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building on these findings, we discuss how our participants' fairness perceptions reflect a multi-dimensional notion of moderation fairness, and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for fairer moderation systems on platforms that afford creator monetization.
"I'm not sure what difference is between their content and mine, other than the person itself": A Study of Fairness Perception of Content Moderation on YouTube