Auditing the COMPAS Recidivism Risk Assessment Tool: Predictive Modelling and Algorithmic Fairness in CS1

Published: 15 June 2020
DOI: 10.1145/3341525.3393998

Abstract

We present an assignment in which students use real data to build a predictive model of re-arrest among criminal defendants. Students assess the algorithmic fairness of a real-world criminal risk assessment tool (RAT), reproducing results from an influential ProPublica story and a 2018 Science Advances paper. Students then explore different measures of algorithmic fairness and adjust their model to satisfy the false positive parity measure.
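To make the false positive parity measure concrete, here is a minimal Python sketch of the underlying computation. It is not the assignment's code: the risk scores, labels, groups, and thresholds below are illustrative assumptions. False positive parity asks that defendants who were not re-arrested be flagged as high risk at (approximately) the same rate across groups; one way to satisfy it is to adjust one group's decision threshold.

```python
# Minimal sketch of a false positive parity check; the scores, labels,
# and groups below are illustrative assumptions, not the assignment's data.

def false_positive_rate(predictions, labels):
    """Fraction of actual negatives (label 0) that were predicted positive (1)."""
    preds_on_negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(preds_on_negatives) / len(preds_on_negatives) if preds_on_negatives else 0.0

def predict(scores, threshold):
    """Flag a defendant as high risk when the model's score meets the threshold."""
    return [1 if s >= threshold else 0 for s in scores]

# Hypothetical risk scores in [0, 1] and true re-arrest labels for two groups.
scores_a, labels_a = [0.2, 0.7, 0.9, 0.4], [0, 1, 1, 0]
scores_b, labels_b = [0.3, 0.6, 0.8, 0.5], [0, 0, 1, 0]

fpr_a = false_positive_rate(predict(scores_a, 0.5), labels_a)
fpr_b = false_positive_rate(predict(scores_b, 0.5), labels_b)
print(f"FPR A: {fpr_a:.2f}, FPR B: {fpr_b:.2f}")  # unequal rates: parity violated

# Raising group B's threshold lowers its false positive rate; sweeping the
# threshold until the two rates match is one way to satisfy the parity measure.
fpr_b_adjusted = false_positive_rate(predict(scores_b, 0.9), labels_b)
print(f"FPR B at threshold 0.9: {fpr_b_adjusted:.2f}")
```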
Our target audience is students in Introduction to Data Science courses that require no previous computing experience, as well as students in standard CS1 courses. We advocate for teaching predictive modelling in CS1, and to facilitate this we provide tutorials on predictive modelling and algorithmic fairness in both Python and Java, along with a simplified "Learning Machine" API in both languages.
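The abstract does not spell out the Learning Machine API, and we do not reproduce it here. The sketch below is purely hypothetical: it shows the kind of simplified learn/predict interface a CS1-level wrapper might expose, with all names and signatures being our assumptions; the real API is on the companion website.

```python
# Purely hypothetical sketch of a CS1-friendly classifier wrapper; the actual
# "Learning Machine" API may differ (see the companion website for the real one).

class LearningMachine:
    """Memorizes, per feature combination, which label occurred more often."""

    def __init__(self):
        self.counts = {}  # feature tuple -> [occurrences of label 0, of label 1]

    def learn(self, rows, labels):
        """Tally the observed label for each training example's features."""
        for row, label in zip(rows, labels):
            self.counts.setdefault(tuple(row), [0, 0])[label] += 1

    def predict(self, row):
        """Return the label seen more often for these features (default 0)."""
        zeros, ones = self.counts.get(tuple(row), [0, 0])
        return 1 if ones > zeros else 0

# Tiny illustrative training set: two binary features, a binary label.
machine = LearningMachine()
machine.learn([[1, 0], [1, 0], [0, 1]], [1, 1, 0])
print(machine.predict([1, 0]))  # -> 1
```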
Our approach enables teaching algorithmic fairness, and predictive modelling more generally, very early in students' computing careers. A companion website with all of our teaching materials is available at https://PredictiveModellingEarly.github.io/.

References

[1] Julia Angwin and Jeff Larson. 2016. Machine bias: There's software used across the country to predict future criminals and it's biased against blacks. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[2] Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 153--163.
[3] Sam Corbett-Davies and Sharad Goel. 2018. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023 (2018).
[4] Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. Science Advances 4, 1 (2018).

Published In

ITiCSE '20: Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education
June 2020, 615 pages
ISBN: 9781450368742
DOI: 10.1145/3341525
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. CS1
  2. algorithmic fairness
  3. data science
  4. predictive modelling

Conference

ITiCSE '20

Acceptance Rates

Overall acceptance rate: 552 of 1,613 submissions (34%)
