Research article · DOI: 10.1145/3485447.3511975

Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes

Published: 25 April 2022

Abstract

Recent studies have shown that Graph Neural Networks (GNNs) are extremely vulnerable to attribute inference attacks. To tackle this challenge, existing research on privacy-preserving GNNs assumes that the sensitive attributes of all users are known beforehand. However, due to different privacy preferences, some users (i.e., private users) may prefer not to reveal sensitive information that others (i.e., non-private users) do not mind disclosing. For example, in social networks, male users are typically less sensitive about their age than female users, and the age disclosure of male users can lead to the age information of female users in the same network being exposed. This is partly because social media users are connected: the homophily property and the message-passing mechanism of GNNs can exacerbate individual privacy leakage. In this work, we study a novel and practical problem of learning privacy-preserving GNNs with partially observed sensitive attributes.
In particular, we propose a novel privacy-preserving GCN model, coined DP-GCN, which effectively protects private users' sensitive information from being exposed through the attributes revealed by non-private users in the same network. DP-GCN consists of two modules: first, a Disentangled Representation Learning module (DRL), which disentangles the original non-sensitive attributes into sensitive and non-sensitive latent representations that are orthogonal to each other; second, a Node Classification module (NCL), which trains the GCN to classify unlabeled nodes in the graph using the non-sensitive latent representations. Experimental results on five benchmark datasets demonstrate the effectiveness of DP-GCN in preserving private users' sensitive information while maintaining high node classification accuracy.
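To make the two-module design concrete, the sketch below shows one plausible way to wire a DRL-style encoder (which splits node attributes into sensitive and non-sensitive latent blocks and penalizes their correlation to push them toward orthogonality) to a small GCN classifier that only sees the non-sensitive latents. This is an illustrative reconstruction based solely on the abstract, not the authors' implementation; the layer sizes, the specific orthogonality penalty, the loss weights, and all names (DisentangledEncoder, GCNClassifier, a_hat, etc.) are assumptions.

```python
# Hypothetical DP-GCN-style pipeline (sketch, not the authors' code).
# (1) DRL: encoder produces a sensitive latent z_s and a non-sensitive latent z_n,
#     with a penalty that discourages correlation between the two blocks.
# (2) NCL: a two-layer GCN classifies nodes from z_n only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """DRL module: maps raw attributes X to two latent blocks."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.sens_head = nn.Linear(in_dim, latent_dim)      # sensitive latent
        self.nonsens_head = nn.Linear(in_dim, latent_dim)   # non-sensitive latent

    def forward(self, x):
        z_s = self.sens_head(x)
        z_n = self.nonsens_head(x)
        # Orthogonality penalty: squared cosine similarity between the two blocks.
        ortho = (F.normalize(z_s, dim=1) * F.normalize(z_n, dim=1)).sum(1).pow(2).mean()
        return z_s, z_n, ortho


class GCNClassifier(nn.Module):
    """NCL module: two-layer GCN over a normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, z):
        h = F.relu(a_hat @ self.w1(z))   # propagate, then transform
        return a_hat @ self.w2(h)


def normalize_adj(adj):
    """Symmetric normalization D^{-1/2}(A+I)D^{-1/2} used by standard GCNs."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


# Toy usage with random data (shapes only; replace with a real graph).
n, in_dim, latent, classes = 100, 16, 8, 3
x = torch.randn(n, in_dim)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0.0)                      # no self-loops; normalize_adj adds them
a_hat = normalize_adj(adj)
y = torch.randint(0, classes, (n,))          # node labels
s = torch.randint(0, 2, (n,))                # sensitive attribute (binary, for illustration)
observed = torch.rand(n) < 0.5               # mask: non-private users who revealed it

enc = DisentangledEncoder(in_dim, latent)
clf = GCNClassifier(latent, 16, classes)
sens_clf = nn.Linear(latent, 2)              # grounds z_s on observed users only
opt = torch.optim.Adam(
    list(enc.parameters()) + list(clf.parameters()) + list(sens_clf.parameters()), lr=0.01
)

for _ in range(50):
    z_s, z_n, ortho = enc(x)
    logits = clf(a_hat, z_n)                                             # classify from z_n only
    loss = F.cross_entropy(logits, y)
    loss = loss + F.cross_entropy(sens_clf(z_s[observed]), s[observed])  # supervise z_s
    loss = loss + 0.1 * ortho                                            # encourage disentanglement
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point this sketch tries to convey is that only the non-sensitive latent block ever enters the message-passing classifier, while the partially observed sensitive labels are used only to shape the sensitive block, so that the GCN's propagation cannot carry revealed attributes toward private users.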

Published In

WWW '22: Proceedings of the ACM Web Conference 2022
April 2022, 3764 pages
ISBN: 9781450390965
DOI: 10.1145/3485447

Publisher

Association for Computing Machinery, New York, NY, United States

            Author Tags

            1. Disentangled Representation Learning
            2. Graph Convolutional Network
            3. Orthogonal Subspace
            4. Privacy-Preserving
            5. Social Media

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

Conference

WWW '22: The ACM Web Conference 2022
April 25–29, 2022
Virtual Event, Lyon, France

            Acceptance Rates

            Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

Cited By

• Privacy-Preserved Neural Graph Databases. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2024), 1108–1118. DOI: 10.1145/3637528.3671678
• User Consented Federated Recommender System Against Personalized Attribute Inference Attack. Proceedings of the 17th ACM International Conference on Web Search and Data Mining (2024), 276–285. DOI: 10.1145/3616855.3635830
• A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications. IEEE Transactions on Knowledge and Data Engineering 36(12) (2024), 7497–7515. DOI: 10.1109/TKDE.2024.3454328
• Privacy-Enhanced Graph Neural Network for Decentralized Local Graphs. IEEE Transactions on Information Forensics and Security 19 (2024), 1614–1629. DOI: 10.1109/TIFS.2023.3329971
• Graph neural networks: a survey on the links between privacy and security. Artificial Intelligence Review 57(2) (2024). DOI: 10.1007/s10462-023-10656-4
• Toward Secure Graph Data Collaboration in a Data-Sharing-Free Manner: A Novel Privacy-Preserving Graph Pre-training Model. SSRN Electronic Journal (2023). DOI: 10.2139/ssrn.4413129
• Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (2023), 3474–3483. DOI: 10.1145/3583780.3615104
• Independent Distribution Regularization for Private Graph Embedding. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (2023), 823–832. DOI: 10.1145/3583780.3614933
• Differentially Private Graph Neural Networks for Whole-Graph Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(6) (2023), 7308–7318. DOI: 10.1109/TPAMI.2022.3228315
• TP-NET: Training Privacy-Preserving Deep Neural Networks under Side-Channel Power Attacks. 2022 IEEE International Symposium on Smart Electronic Systems (iSES), 439–444. DOI: 10.1109/iSES54909.2022.00095