ABSTRACT
As algorithmic decision-making becomes more prevalent, so does the study of algorithmic fairness. When fairness is disregarded, bias and discrimination can be created, reproduced, or amplified. Accordingly, work has been done to harmonize definitions of fairness and to categorize ways of improving it. Using demographic data about the protected groups is one possible solution, but in real-world applications privacy concerns, together with uncertainty about which attributes are relevant, make it unrealistic. In this work we therefore provide an overview of methods that do not require such data, identify areas that may be under-researched, and propose research questions for the first phase of the PhD. The influence of dataset size on the discovery and mitigation of unknown biases appears to be such an area, and one that we plan to explore more fully during the thesis.
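To make concrete why the absence of demographic data is a problem, consider a standard group-fairness metric such as demographic parity. A minimal sketch (the function name and toy data are illustrative, not from the paper) shows that computing it requires the protected attribute for every individual — exactly the information the surveyed methods assume is unavailable:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    Note that `group` — the protected attribute — is a required input:
    without it, this metric simply cannot be evaluated.
    """
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

# Toy example: group 0 gets positive predictions at rate 0.75,
# group 1 at rate 0.25, so the disparity is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

Methods that work "without demographics" must replace the explicit `group` labels with something else, e.g. adversarially identified worst-case subgroups (Lahoti et al., 2020) or proxy features (Gupta et al., 2018).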