DOI: 10.1145/3593013
FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
ACM 2023 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, June 12-15, 2023
ISBN:
979-8-4007-0192-4
Published:
12 June 2023
abstract
Public Access
Machine Explanations and Human Understanding

Explanations are hypothesized to improve human understanding of machine learning models and achieve a variety of desirable outcomes, ranging from model debugging to enhancing human decision making. However, empirical studies have found mixed and even ...

abstract
Broadening AI Ethics Narratives: An Indic Art View

Incorporating interdisciplinary perspectives is seen as an essential step towards enhancing artificial intelligence (AI) ethics. In this regard, the field of arts is perceived to play a key role in elucidating diverse historical and cultural narratives, ...

research-article
Public Access
How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making

Discussion of the “right to an explanation” has been increasingly relevant because of its potential utility for auditing automated decision systems, as well as for making objections to such decisions. However, most existing work on explanations focuses ...

research-article
‘We are adults and deserve control of our phones’: Examining the risks and opportunities of a right to repair for mobile apps

Many mobile apps are designed not just to support end-users’ needs, but also commercial aims. This can result in app designs that compromise end-user privacy, safety, and well-being. Since apps nowadays provide vital digital information and services, ...

research-article
Fairness in machine learning from the perspective of sociology of statistics: How machine learning is becoming scientific by turning its back on metrological realism

We argue in this article that the integration of fairness into machine learning, or FairML, is a valuable exemplar of the politics of statistics and their ongoing transformations. Classically, statisticians sought to eliminate any trace of politics from ...

research-article
Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans

This paper concerns the double standard debate in the ethics of AI literature. This debate revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around ...

research-article
Open Access
Optimization’s Neglected Normative Commitments

Optimization is offered as an objective approach to resolving complex, real-world decisions involving uncertainty and conflicting interests. It drives business strategies as well as public policies and, increasingly, lies at the heart of sophisticated ...

research-article
Open Access
Welfarist Moral Grounding for Transparent AI

As popular calls for the transparency of AI systems gain prominence, it is important to think systematically about why transparency matters morally. I'll argue that welfarism provides a theoretical basis for doing so. For welfarists, it is morally ...

research-article
Open Access
Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application

Trust is an important factor in people’s interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab ...

research-article
Open Access
Multi-dimensional Discrimination in Law and Machine Learning - A Comparative Overview

AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms ...

research-article
Public Access
Reconciling Individual Probability Forecasts

Individual probabilities refer to the probabilities of outcomes that are realized only once: the probability that it will rain tomorrow, the probability that Alice will die within the next 12 months, the probability that Bob will be arrested for a ...

research-article
The Gradient of Generative AI Release: Methods and Considerations

As increasingly powerful generative AI systems are developed, the release method greatly varies. We propose a framework to assess six levels of access to generative AI systems: fully closed; gradual or staged access; hosted access; cloud-based or API ...

research-article
Open Access
In the Name of Fairness: Assessing the Bias in Clinical Record De-identification

Data sharing is crucial for open science and reproducible research, but the legal sharing of clinical data requires the removal of protected health information from electronic health records. This process, known as de-identification, is often achieved ...

research-article
Open Access
“How Biased are Your Features?”: Computing Fairness Influence Functions with Global Sensitivity Analysis

Fairness in machine learning has attained significant focus due to the widespread application in high-stake decision-making tasks. Unregulated machine learning classifiers can exhibit bias towards certain demographic groups in data, thus the ...

research-article
Open Access
Preventing Discriminatory Decision-making in Evolving Data Streams

Bias in machine learning has rightly received significant attention over the past decade. However, most fair machine learning (fair-ML) work addressing bias in decision-making systems has focused solely on the offline setting. Despite the wide ...

research-article
WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and Democratic is FAccT?

Studies conducted on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) samples are considered atypical of the world’s population and may not accurately represent human behavior. In this study, we aim to quantify the extent to which the ACM ...

research-article
Trustworthy AI and the Logics of Intersectional Resistance

Growing awareness of the capacity of AI to inflict harm has inspired efforts to delineate principles for ‘trustworthy AI’ and, from these, objective indicators of ‘trustworthiness’ for auditors and regulators. Such efforts run the risk of formalizing a ...

research-article
In her Shoes: Gendered Labelling in Crowdsourced Safety Perceptions Data from India

In recent years, women's safety mobile applications have proliferated in India, crowdsourcing street safety perceptions to generate ‘safety maps’ used by policy makers for urban design and by academics for studying mobility patterns. Men ...

research-article
Public Access
The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions

We introduce dataset multiplicity, a way to study how inaccuracies, uncertainty, and social bias in training datasets impact test-time predictions. The dataset multiplicity framework asks a counterfactual question of what the set of resultant models (...

research-article
Open Access
"I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models

Large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward historically marginalized communities. While previous work has primarily focused on harms related to age and race, emerging ...

research-article
Open Access
Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs

Amidst decline in public trust in technology, computing ethics have taken center stage, and critics have raised questions about corporate “ethics washing.” Yet few studies examine the actual implementation of AI ethics values in technology companies. ...

research-article
Open Access
Algorithmic Transparency from the South: Examining the state of algorithmic transparency in Chile's public administration algorithms

This paper presents the results and conclusions of the study on algorithmic transparency in public administration and the use of automated decision systems within the State of Chile, carried out by the Public Innovation Laboratory of the Universidad ...

research-article
Open Access
Who Should Pay When Machines Cause Harm? Laypeople’s Expectations of Legal Damages for Machine-Caused Harm

The question of who should be held responsible when machines cause harm in high-risk environments is open to debate. Empirical research examining laypeople’s opinions has been largely restricted to the moral domain and has only inspected a limited set of ...

abstract
Open Access
Diagnosing AI Explanation Methods with Folk Concepts of Behavior

We investigate a formalism for the conditions of a successful explanation of AI. We consider “success” to depend not only on what information the explanation contains, but also on what information the human explainee understands from it. Theory of mind ...

research-article
Open Access
Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study

Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to ...

research-article
The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices

The technical progression of artificial intelligence (AI) research has been built on breakthroughs in fields such as computer science, statistics, and mathematics. However, in the past decade AI researchers have increasingly looked to the social ...

research-article
Making Intelligence: Ethical Values in IQ and ML Benchmarks

In recent years, ML researchers have wrestled with defining and improving machine learning (ML) benchmarks and datasets. In parallel, some have trained a critical lens on the ethics of dataset creation and ML research. In this position paper, we ...

research-article
Open Access
Saliency Cards: A Framework to Characterize and Compare Saliency Methods

Saliency methods are a common class of machine learning interpretability techniques that calculate how important each input feature is to a model’s output. We find that, with the rapid pace of development, users struggle to stay informed of the strengths ...

research-article
Public Access
Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints

Prediction models have been widely adopted as the basis for decision-making in domains as diverse as employment, education, lending, and health. Yet, few real world problems readily present themselves as precisely formulated prediction tasks. In ...

research-article
Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument

Recidivism risk assessment instruments are presented as an ‘evidence-based’ strategy for criminal justice reform – a way of increasing consistency in sentencing, replacing cash bail, and reducing mass incarceration. In practice, however, AI-centric ...
