DOI: 10.1145/3462203.3475894
Short paper

Assessing Algorithmic Fairness without Sensitive Information

Published: 9 September 2021

ABSTRACT

As the prevalence of algorithmic decision-making increases, so does the study of algorithmic fairness. When fairness is disregarded, bias and discrimination are created, reproduced, or amplified. Accordingly, work has been done to harmonize definitions of fairness and to categorize ways of improving it. While using demographic data about the protected groups is one possible solution, privacy concerns and uncertainty about which attributes are relevant make it unrealistic in many real-world applications. Consequently, in this work we provide an overview of the methods that do not require such data, identify areas that might be under-researched, and propose research questions for the first phase of the PhD. The influence of dataset size on the discovery and mitigation of unknown biases appears to be such an area, one that we plan to explore more fully during the thesis.
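To make the problem concrete, the sketch below is an illustrative addition (not code from the paper) that contrasts a standard group fairness metric with one crude signal that needs no group labels. The function demographic_parity_difference presupposes access to a sensitive attribute `group`, which is exactly what real-world deployments often lack; worst_case_loss is a CVaR-style proxy in the spirit of demographics-free fairness methods, flagging a badly served subpopulation without naming it. All names and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical data: a sensitive attribute (unavailable to the auditor in the
# "without demographics" setting) and a model that under-scores positives in group 1.
group = rng.integers(0, 2, size=n)
y_true = rng.binomial(1, 0.5, size=n)
base = np.where(y_true == 1, 0.8, 0.2)                       # scores of an unbiased model
penalty = np.where((group == 1) & (y_true == 1), 0.4, 0.0)   # hidden bias against group 1
scores = np.clip(base - penalty + rng.normal(0, 0.1, size=n), 0.0, 1.0)
y_pred = (scores >= 0.5).astype(int)

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across groups; needs the sensitive attribute."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def worst_case_loss(losses, alpha=0.2):
    """Mean loss over the worst-off alpha fraction of examples (a CVaR-style proxy).

    No group labels are used: a value far above the average loss signals that
    some subpopulation is served poorly, even if we cannot name it.
    """
    k = max(1, int(np.ceil(alpha * len(losses))))
    return float(np.sort(losses)[-k:].mean())

per_example_loss = (scores - y_true) ** 2
print("demographic parity difference:", demographic_parity_difference(y_pred, group))
print("average loss:", per_example_loss.mean())
print("worst-20% loss:", worst_case_loss(per_example_loss, alpha=0.2))
```

The worst-case loss deliberately over-weights the hardest examples; when it diverges sharply from the average loss, that divergence hints at an unknown, possibly protected, subgroup being harmed, which is the kind of signal that methods working without sensitive attributes build on.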


Published in

GoodIT '21: Proceedings of the Conference on Information Technology for Social Good
September 2021, 345 pages
ISBN: 9781450384780
DOI: 10.1145/3462203
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
