research-article

"I'm not sure what difference is between their content and mine, other than the person itself": A Study of Fairness Perception of Content Moderation on YouTube

Published: 11 November 2022

Abstract

How social media platforms could fairly conduct content moderation is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors could affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking or elaborating on the formation processes of users' perceived (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness, including equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon the findings, we discuss how our participants' fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization.

• Published in

  Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue CSCW2 (CSCW)
  November 2022
  8205 pages
  EISSN: 2573-0142
  DOI: 10.1145/3571154

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

