DOI: 10.1145/3313831.3376813
Research article · Open access · CHI Conference Proceedings

Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences

Published: 23 April 2020

Abstract

Algorithmic decision-making systems are increasingly used throughout the public and private sectors to make, or to assist humans in making, important decisions with real social consequences. While there has been substantial research in recent years on building fair decision-making algorithms, less research has sought to understand the factors that affect people's perceptions of fairness in these systems, which we argue is also important for their broader acceptance. In this research, we conduct an online experiment to better understand perceptions of fairness, focusing on three sets of factors: algorithm outcomes, algorithm development and deployment procedures, and individual differences. We find that people rate an algorithm as more fair when it predicts in their favor, and that this favorability effect can even outweigh the negative effect of describing an algorithm as highly biased against particular demographic groups. This effect is moderated by several variables, including participants' education level, gender, and several aspects of the development procedure. Our findings suggest that systems that evaluate algorithmic fairness through users' feedback must consider the possibility of an "outcome favorability" bias.

Supplementary Material

ZIP File (pn9056aux.zip)
The supplementary material includes the questionnaire we used in the experiment for measuring computer literacy. The three questions on computer skills are adapted from [2] and [1]. The two questions on computer familiarity are adapted from [3]. Since our study is specifically about decision-making algorithms, which differs from previous studies measuring computer literacy, we developed three new questions focusing on participants' knowledge of algorithms.

References

[1]
2019a. Amazon Mechanical Turk. (2019). https://www.mturk.com/worker/help
[2]
2019b. NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI) (nsf19571). (2019). https://www.nsf.gov/pubs/2019/nsf19571/nsf19571.htm
[3]
J Stacy Adams. 1965. Inequity in social exchange. In Advances in experimental social psychology. Vol. 2. Academic Press, 267--299.
[4]
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. 2018. A reductions approach to fair classification. arXiv preprint arXiv:1803.02453 (2018).
[5]
Ifeoma Ajunwa, Sorelle Friedler, Carlos E Scheidegger, and Suresh Venkatasubramanian. 2016. Hiring by algorithm: predicting and preventing disparate impact. Available at SSRN (2016). https://papers.ssrn.com/abstract=2746078
[6]
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. (May 2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[7]
C Daniel Batson, Nadia Ahmad, Jodi Yin, Steven J Bedell, Jennifer W Johnson, and Christie M Templin. 1999. Two threats to the common good: Self-interested egoism and empathy-induced altruism. Personality and Social Psychology Bulletin 25, 1 (1999), 3--16.
[8]
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 6541--6549.
[9]
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. 2018a. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research (2018).
[10]
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. 2018b. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research (2018), 1--24.
[11]
Quoctrung Bui. 2017. How Many Americans Would Pass an Immigration Test Endorsed by Trump? The New York Times (2017). https://www.nytimes.com/interactive/2017/08/23/upshot/immigration-quiz-raise-act-trump.html
[12]
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. 77--91.
[13]
Pew Research Center. 2016a. Mechanical Turk: Research in the Crowdsourcing Age. (July 2016). https://www.pewinternet.org/2016/07/11/research-in-the-crowdsourcing-age-a-case-study/
[14]
Pew Research Center. 2016b. On Views of Race and Inequality, Blacks and Whites Are Worlds Apart. (June 2016). https://www.pewsocialtrends.org/2016/06/27/on-views-of-race-and-inequality-blacks-and-whites-are-worlds-apart/
[15]
Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810 (2018).
[16]
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 797--806.
[17]
Bo Cowgill. 2018. Bias and Productivity in Humans and Algorithms: Theory and Evidence from Resume Screening. Columbia Business School, Columbia University 29 (2018).
[18]
Jeffrey Dastin. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters (Oct. 2018). https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
[19]
Morton Deutsch. 1985. Distributive justice: A social-psychological perspective. Vol. 437. Yale University Press New Haven, CT.
[20]
Michael A DeVito, Darren Gergle, and Jeremy Birnholtz. 2017. Algorithms ruin everything:# RIPTwitter, folk theories, and resistance to algorithmic change in social media. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 3163--3174.
[21]
Michael A DeVito, Ashley Marie Walker, and Jeremy Birnholtz. 2018. 'Too Gay for Facebook': Presenting LGBTQ+ Identity Throughout the Personal Social Media Ecosystem. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 44.
[22]
Kristina A Diekmann, Steven M Samuels, Lee Ross, and Max H Bazerman. 1997. Self-interest and fairness in problems of resource allocation: allocators versus recipients. Journal of personality and social psychology 72, 5 (1997), 1061.
[23]
Berkeley J Dietvorst, Joseph P Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144, 1 (2015), 114.
[24]
Berkeley J Dietvorst, Joseph P Simmons, and Cade Massey. 2016. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science 64, 3 (2016), 1155--1170.
[25]
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference. ACM, 214--226.
[26]
Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, and Max Leiserson. 2018. Decoupled classifiers for group-fair and efficient machine learning. In Conference on Fairness, Accountability and Transparency. 119--133.
[27]
Isil Erel, Lea H. Stern, Chenhao Tan, and Michael S. Weisbach. 2018. Research: Could Machine Learning Help Companies Select Better Board Directors? Harvard Business Review (April 2018). https://hbr.org/2018/04/research-could-machine-learning-help-companies-select-better-board-directors
[28]
Justin Esarey, Timothy C Salmon, and Charles Barrilleaux. 2012. What Motivates Political Preferences? Self-Interest, Ideology, and Fairness in a Laboratory Democracy. Economic Inquiry 50, 3 (2012), 604--624.
[29]
Motahhare Eslami, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I like it, then I hide it: Folk theories of social feeds. In Proceedings of the 2016 CHI conference on human factors in computing systems. ACM, 2371--2382.
[30]
Motahhare Eslami, Kristen Vaccaro, Min Kyung Lee, Amit Elazari Bar On, Eric Gilbert, and Karrie Karahalios. 2019. User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 494.
[31]
Robert Folger and Jerald Greenberg. 1985. Procedural justice: An interpretive analysis of personnel systems. (1985).
[32]
Nikolaus Franke, Peter Keinz, and Katharina Klausberger. 2013. "Does this sound like a fair deal?": Antecedents and consequences of fairness expectations in the individual's decision to participate in firm innovation. Organization Science 24, 5 (2013), 1495--1516.
[33]
Megan French and Jeff Hancock. 2017. What's the folk theory? Reasoning about cyber-social systems. Available at SSRN (2017). https://papers.ssrn.com/abstract=2910571
[34]
Stephen W Gilliland. 1993. The perceived fairness of selection systems: An organizational justice perspective. Academy of management review 18, 4 (1993), 694--734.
[35]
Nina Grgic-Hlaca, Elissa M Redmiles, Krishna P Gummadi, and Adrian Weller. 2018. Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. In Proceedings of the 2018 World Wide Web Conference. International World Wide Web Conferences Steering Committee, 903--912.
[36]
Moritz Hardt, Eric Price, Nati Srebro, and others. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems. 3315--3323.
[37]
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In European Conference on Computer Vision. Springer, 3--19.
[38]
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. 2019. Improving fairness in machine learning systems: What do industry practitioners need?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 600.
[39]
George C Homans. 1974. Social behavior: Its elementary forms. (1974).
[40]
Youyang Hou, Cliff Lampe, Maximilian Bulinski, and James J Prescott. 2017. Factors in Fairness and Emotion in Online Case Resolution Systems. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2511--2522.
[41]
Dan Hurley. 2018. Can an Algorithm Tell When Kids Are in Danger? The New York Times (Jan. 2018). https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html
[42]
Joichi Ito. 2016. Society in the Loop Artificial Intelligence. Joi Ito's Web (June 2016).
[43]
Anil Kalhan. 2013. Immigration policing and federalism through the lens of technology, surveillance, and privacy. Ohio St. LJ 74 (2013), 1105.
[44]
René F Kizilcec. 2016. How much information?: Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2390--2395.
[45]
Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2017. Human decisions and machine predictions. The quarterly journal of economics 133, 1 (2017), 237--293.
[46]
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016).
[47]
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 1885--1894.
[48]
Nathan R Kuncel, David M Klieger, and Deniz S Ones. 2014. In hiring, algorithms beat instinct. Harvard business review 92, 5 (2014), 32. https://hbr.org/2014/05/in-hiring-algorithms-beat-instinct
[49]
Kai Lamertz. 2002. The social construction of fairness: Social influence and sense making in organizations. Journal of Organizational Behavior 23, 1 (2002), 19--37.
[50]
Min Kyung Lee. 2018. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society 5, 1 (2018), 1--16.
[51]
Min Kyung Lee and Su Baykal. 2017. Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, 1035--1048.
[52]
Min Kyung Lee, Ji Tae Kim, and Leah Lizarondo. 2017. A human-centered approach to algorithmic services: Considerations for fair and motivating smart community service management that allocates donations to non-profit organizations. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 3365--3376.
[53]
Min Kyung Lee, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. Working with machines: The impact of algorithmic and data-driven management on human workers. In Proceedings of the 2015 CHI Conference on Human Factors in Computing Systems. ACM, 1603--1612.
[54]
Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, and Patrick Vinck. 2018. Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology 31, 4 (2018), 611--627.
[55]
Melvin J Lerner. 1974. The justice motive: "Equity" and "parity" among children. Journal of Personality and Social Psychology 29, 4 (1974), 539.
[56]
Brenda Major and Kay Deaux. 1982. Individual differences in justice behavior. In Equity and justice in social behavior. Academic Press, 43--76.
[57]
Louise Matsakis. 2016. The Unknown, Poorly Paid Labor Force Powering Academic Research. (Feb. 2016). https://motherboard.vice.com/en_us/article/8q8ggb/the-unknown-poorly-paid-labor-force-powering-academic-research
[58]
Aditya Krishna Menon and Robert C Williamson. 2018. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency. 107--118.
[59]
Alex P. Miller. 2018. Want Less-Biased Decisions? Use Algorithms. Harvard Business Review (July 2018). https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms
[60]
Carolina Moliner, Vicente Martínez-Tur, José M Peiró, José Ramos, and Russell Cropanzano. 2013. Perceived Reciprocity and Well-Being at Work in Non-Professional Employees: Fairness or Self-Interest? Stress and Health 29, 1 (2013), 31--39.
[61]
Diana C Mutz and Jeffery J Mondak. 1997. Dimensions of sociotropic behavior: Group-based judgements of fairness and well-being. American Journal of Political Science (1997), 284--308.
[62]
Rupert W Nacoste. 1987. But do they care about fairness? The dynamics of preferential treatment and minority interest. Basic and Applied Social Psychology 8, 3 (1987), 177--191.
[63]
Clifford Nass and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. Journal of social issues 56, 1 (2000), 81--103.
[64]
Rachel O'Dwyer. 2018. Algorithms are making the same mistakes as humans assessing credit scores. (2018). https://qz.com/1276781/algorithms-are-making-the-same-mistakes-assessing-credit-scores-that-humans-did-a-century-ago/
[65]
Emma Pierson. 2017. Demographics and discussion influence views on algorithmic fairness. arXiv preprint arXiv:1712.09124 (2017).
[66]
Carrie Pomeroy. 2019. How community members in Ramsey County stopped a big-data plan from flagging students as at-risk. (Feb. 2019). https://www.tcdailyplanet.net/how-community-members-in-ramsey-county-stopped-a-big-data-plan-from-flagging-students-as-at-risk/
[67]
Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model interpretability. arXiv preprint arXiv:1802.07810 (2018).
[68]
Iyad Rahwan. 2018. Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology 20, 1 (2018), 5--14.
[69]
Byron Reeves and Clifford Ivar Nass. 1996. The media equation: How people treat computers, television, and new media like real people and places. Cambridge university press.
[70]
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 1135--1144.
[71]
Lauren A Rivera. 2012. Hiring as cultural matching: The case of elite professional service firms. American sociological review 77, 6 (2012), 999--1022.
[72]
Ismael Rodriguez-Lara and Luis Moreno-Garrido. 2012. Self-interest and fairness: self-serving choices of justice principles. Experimental Economics 15, 1 (2012), 158--175.
[73]
E Elisabet Rutström and Melonie B Williams. 2000. Entitlements and fairness: an experimental study of distributive preferences. Journal of Economic Behavior & Organization 43, 1 (2000), 75--89.
[74]
Gunar Schirner, Deniz Erdogmus, Kaushik Chowdhury, and Taskin Padir. 2013. The future of human-in-the-loop cyber-physical systems. Computer 1 (2013), 36--45.
[75]
Daniel B Shank. 2012. Perceived Justice and Reactions to Coercive Computers. In Sociological Forum, Vol. 27. Wiley Online Library, 372--391.
[76]
Linda J Skitka. 1999. Ideological and attributional boundaries on public compassion: Reactions to individuals and communities affected by a natural disaster. Personality and Social Psychology Bulletin 25, 7 (1999), 793--808.
[77]
Linda J Skitka, Jennifer Winquist, and Susan Hutchinson. 2003. Are outcome fairness and outcome favorability distinguishable psychological constructs? A meta-analytic review. Social Justice Research 16, 4 (2003), 309--341.
[78]
Megha Srivastava, Hoda Heidari, and Andreas Krause. 2019. Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning. arXiv preprint arXiv:1902.04783 (2019).
[79]
Meng-Jung Tsai, Ching-Yeh Wang, and Po-Fen Hsu. 2019. Developing the computer programming self-efficacy scale for computer literacy education. Journal of Educational Computing Research 56, 8 (2019), 1345--1360.
[80]
Tom R Tyler. 2000. Social justice: Outcome and procedure. International journal of psychology 35, 2 (2000), 117--125.
[81]
Ilja Van Beest and Eric Van Dijk. 2007. Self-interest and fairness in coalition formation: A social utility approach to understanding partner selection and payoff allocations in groups. European Review of Social Psychology 18, 1 (2007), 132--174.
[82]
Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 440.
[83]
Elaine Walster, G William Walster, and Ellen Berscheid. 1978. Equity: Theory and research. (1978).
[84]
Connie R Wanberg, Mark B Gavin, and Larry W Bunce. 1999. Perceived fairness of layoffs among individuals who have been laid off: A longitudinal study. Personnel Psychology 52, 1 (1999), 59--84.
[85]
Ann Wilkinson, Julia Roberts, and Alison E While. 2010. Construction of an instrument to measure student information and communication technology skills, experience and attitudes to e-learning. Computers in Human Behavior 26, 6 (2010), 1369--1376.
[86]
Haiyi Zhu, Bowen Yu, Aaron Halfaker, and Loren Terveen. 2018. Value-sensitive algorithm design: Method, case study, and lessons. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 194.


      Published In

CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020 · 10688 pages · ISBN: 9781450367080 · DOI: 10.1145/3313831
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. algorithm development
      2. algorithmic decision-making
3. algorithm outcome
      4. perceived fairness



      Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions (24%)


