DOI: 10.1145/3626246.3654738
Short paper · Open access

Responsible Model Selection with Virny and VirnyView

Published: 09 June 2024

Abstract

In this demonstration, we present a comprehensive software library for model auditing and responsible model selection, called Virny, along with an interactive tool called VirnyView. Our library is modular and extensible. It implements a rich set of performance and fairness metrics, including novel metrics that quantify and compare model stability and uncertainty, and it enables performance analysis based on multiple sensitive attributes and their intersections. The Virny library and the VirnyView tool are available at https://github.com/DataResponsibly/Virny and https://r-ai.co/VirnyView.
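The bootstrap-based notion of label stability that the abstract alludes to can be illustrated independently of Virny's own API. The sketch below (a conceptual illustration, not Virny's actual interface; the data, the toy threshold "model", and all names are hypothetical) trains an ensemble of models on bootstrap resamples, measures per-point agreement of their test predictions, and disaggregates the resulting stability score by a binary sensitive attribute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one feature, a binary label, and a binary sensitive attribute.
n = 1000
group = rng.integers(0, 2, n)
x = rng.normal(loc=group * 0.5, scale=1.0, size=n)
y = (x + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

x_train, x_test = x[:800], x[800:]
y_train, y_test = y[:800], y[800:]
g_test = group[800:]

def fit_threshold(xs, ys):
    """Toy 'model': pick the decision threshold that maximizes training accuracy."""
    candidates = np.quantile(xs, np.linspace(0.05, 0.95, 19))
    accs = [np.mean((xs > t) == ys) for t in candidates]
    return candidates[int(np.argmax(accs))]

# Bootstrap an ensemble and collect each member's test predictions.
m = 50
preds = np.empty((m, len(x_test)), dtype=int)
for i in range(m):
    idx = rng.integers(0, len(x_train), len(x_train))
    t = fit_threshold(x_train[idx], y_train[idx])
    preds[i] = (x_test > t).astype(int)

# Label stability per test point: fraction of ensemble members that agree
# with the majority vote (ranges from 0.5 to 1.0).
agree = preds.mean(axis=0)
stability = np.maximum(agree, 1 - agree)

# Disaggregate by sensitive group, in the spirit of Virny's per-group metrics.
for g in (0, 1):
    mask = g_test == g
    print(f"group {g}: mean stability = {stability[mask].mean():.3f}")
```

A gap in mean stability between the two groups would indicate that the model's predictions are systematically less reproducible for one group, which is exactly the kind of disparity that stability metrics disaggregated by sensitive attribute are meant to surface.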

Supplemental Material

MP4 File
In this video presentation, we introduce a comprehensive software library for model auditing and responsible model selection called Virny, along with an interactive tool called VirnyView. The demonstration offers a detailed walkthrough of the responsible model selection pipeline, illustrating how our tool empowers data scientists to navigate the complexities of model selection under different performance dimensions, with a particular focus on the widely used ACS Income fair-ML benchmark.


Published In

SIGMOD/PODS '24: Companion of the 2024 International Conference on Management of Data, June 2024, 694 pages.
ISBN: 9798400704222
DOI: 10.1145/3626246
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. data-centric ai
  2. fairness
  3. model selection
  4. robustness
  5. stability

Funding Sources

  • NSF
  • UL Research Institutes through the Center for Advancing Safety of Machine Intelligence

Conference

SIGMOD/PODS '24

Acceptance Rates

Overall acceptance rate: 785 of 4,003 submissions, 20%
