ABSTRACT
Auditing for fairness often requires relying on a secondary source, e.g., Census data, to infer protected attributes. To avoid assuming an overarching model that ties such information to the primary data source, a recent line of work has suggested finding the entire range of fairness valuations consistent with both sources. Though attractive, the current form of this methodology relies on rigid analytical expressions and cannot handle continuous decisions, e.g., metrics of urban services. We show that, in such settings, directly adapting these expressions can lead to loose and even vacuous bounds, particularly on just how fair the audited decisions may be; an audit based on them would thus be perceived more optimistically than it ought to be. We propose a linear programming formulation that handles continuous decisions by finding the empirical fairness range when statistical parity is measured through the Kolmogorov-Smirnov distance. The size of this program is linear in the number of data points, and it can be solved efficiently. We analyze this approach and provide finite-sample guarantees for the resulting fairness valuation. We then apply it to synthetic data and to Chicago 311 City Services data, and demonstrate its ability to reveal small but detectable bounds on fairness.
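As a minimal illustration of the fairness measure the abstract refers to (not the paper's LP-based audit itself), the Kolmogorov-Smirnov distance between the empirical decision distributions of two groups can be computed as follows; the group samples below are synthetic placeholders:

```python
import numpy as np

def ks_distance(x, y):
    """Two-sample Kolmogorov-Smirnov distance: the largest absolute gap
    between the empirical CDFs of the two samples."""
    grid = np.sort(np.concatenate([x, y]))
    # Empirical CDF of each sample evaluated on the pooled grid.
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

# Hypothetical continuous decisions (e.g., service response times)
# for two demographic groups, with a small shift between them.
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 500)
group_b = rng.normal(0.3, 1.0, 500)
print(ks_distance(group_a, group_b))
```

Statistical parity for continuous decisions corresponds to this distance being zero; the paper's linear program bounds the range this quantity can take when group membership must be inferred from a secondary source.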