DOI: 10.1145/3637528.3671960
Research Article

Your Neighbor Matters: Towards Fair Decisions Under Networked Interference

Published: 24 August 2024

Abstract

In the era of big data, decision-making in social networks may introduce bias because individuals are interconnected. For instance, on peer-to-peer loan platforms on the Web, downstream loan approval or rejection depends on an individual's attributes together with those of their connected neighbors, including sensitive attributes. Unfortunately, conventional fairness approaches often assume independent individuals and thus overlook the impact that one person's sensitive attribute can have on decisions about others. To fill this gap, we introduce "Interference-aware Fairness" (IAF), which defines two forms of discrimination, Self-Fairness (SF) and Peer-Fairness (PF), by leveraging advances in interference analysis within causal inference. Specifically, SF and PF causally capture and distinguish discrimination stemming from an individual's own sensitive attributes (with neighbors' sensitive attributes held fixed) and from neighbors' sensitive attributes (with the individual's own sensitive attributes held fixed), respectively. Hence, a network-informed decision model is fair only when SF and PF are satisfied simultaneously, i.e., when interventions on an individual's sensitive attributes and on those of their peers both yield equivalent outcomes. To achieve IAF, we develop a deep doubly robust framework to estimate and regularize the SF and PF metrics of decision models. Extensive experiments on synthetic and real-world datasets validate the proposed concepts and methods.
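To make the two notions concrete, the display below sketches one way SF and PF can be read as interventional contrasts in potential-outcome notation. The notation Y_i(s, s_N) for the decision of unit i when its own sensitive attribute is set to s and its neighbors' sensitive attributes are set to s_N, as well as the expectation-based form, are assumptions made here for exposition and need not match the paper's exact definitions.

```latex
% Hedged sketch: SF and PF as interventional contrasts
% (notation assumed here, not necessarily the paper's exact formulation).
\[
\text{SF:}\quad
\mathbb{E}\big[\, Y_i(s,\, s_{\mathcal{N}_i}) - Y_i(s',\, s_{\mathcal{N}_i}) \,\big] = 0
\quad \text{for all } s \neq s' \ \text{(neighbors' attributes held fixed)},
\]
\[
\text{PF:}\quad
\mathbb{E}\big[\, Y_i(s_i,\, \tilde{s}_{\mathcal{N}_i}) - Y_i(s_i,\, \tilde{s}'_{\mathcal{N}_i}) \,\big] = 0
\quad \text{for all } \tilde{s}_{\mathcal{N}_i} \neq \tilde{s}'_{\mathcal{N}_i} \ \text{(own attribute held fixed)}.
\]
```

Under this reading, IAF requires both displays to hold simultaneously, matching the abstract's requirement that intervening on either the individual's or the peers' sensitive attributes leaves the decision unchanged.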

Supplemental Material

MP4 File - Video of Paper rtp1737
Let us review an illustrative example in which modeling network structure in an online P2P lending dataset can be regarded as a double-edged sword. On the one hand, connections among individuals boost accuracy; on the other hand, these connections introduce a specific form of unfairness. To characterize this unfairness, we propose a novel fairness metric named "Interference-aware Fairness" (IAF), which distinguishes effects coming from an individual's neighbors from effects coming from the individual themselves. Moreover, we perform a theoretical analysis comparing the proposed IAF metric with previous ones. The results show that classical metrics, including EO and DP as well as their causal variants, cannot capture unfairness caused by peer effects.
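For readers who want a numerical intuition for the doubly robust estimation mentioned in the abstract, the following is a minimal, self-contained sketch, not the authors' implementation. It assumes a binary sensitive attribute, a binarized summary of neighbors' sensitive attributes, synthetic data, and off-the-shelf logistic models in place of the paper's deep networks; the names `aipw_contrast`, `S`, and `G` are illustrative choices made here.

```python
# Hedged sketch (not the paper's code): AIPW-style doubly robust estimates of
# self- and peer-effect contrasts, under strong simplifications: binary own
# sensitive attribute S, a binarized neighbor exposure G, synthetic data, and
# logistic nuisance models instead of deep networks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))                      # individual covariates
S = rng.binomial(1, 0.5, size=n)                 # own sensitive attribute
G = rng.binomial(1, 0.4 + 0.2 * S)               # toy binarized neighbor exposure
logit = X[:, 0] + 0.8 * S + 0.6 * G
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # observed decisions

def aipw_contrast(treat, covars, y):
    """AIPW estimate of E[Y(1) - Y(0)] for a binary 'treatment' given covariates."""
    prop = LogisticRegression(max_iter=1000).fit(covars, treat).predict_proba(covars)[:, 1]
    out = LogisticRegression(max_iter=1000).fit(np.c_[covars, treat], y)
    mu1 = out.predict_proba(np.c_[covars, np.ones_like(treat)])[:, 1]
    mu0 = out.predict_proba(np.c_[covars, np.zeros_like(treat)])[:, 1]
    return np.mean(mu1 - mu0
                   + treat * (y - mu1) / prop
                   - (1 - treat) * (y - mu0) / (1 - prop))

# SF-style contrast: intervene on own attribute S, adjusting for X and the neighbor summary G.
sf_gap = aipw_contrast(S, np.c_[X, G], Y)
# PF-style contrast: intervene on the neighbor summary G, adjusting for X and own attribute S.
pf_gap = aipw_contrast(G, np.c_[X, S], Y)
print(f"SF gap ~ {sf_gap:.3f}, PF gap ~ {pf_gap:.3f}")
```

The design choice illustrated is the standard AIPW combination of an outcome model and a propensity model, which remains consistent if either nuisance model is correctly specified; a deep variant would replace the logistic models with networks over node and neighborhood features and penalize the estimated SF and PF gaps alongside the prediction loss during training.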

Published In

KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2024, 6901 pages
ISBN: 9798400704901
DOI: 10.1145/3637528
Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. algorithmic fairness
2. machine learning
3. social network

Conference

KDD '24
Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
