
Multiple-model and time-sensitive dynamic active learning for recurrent graph convolutional network model extraction attacks

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

This paper explores the vulnerability of a popular deep learning model, the recurrent graph convolutional network (RGCN), from the perspective of model extraction attacks. As a commonly used attack method, graph-based active learning strategies can perform black-box model extraction attacks that recover high-fidelity deep learning models without knowledge of the target model's structure and parameters. On dynamic graphs, however, they still suffer from two limitations that reduce the fidelity of the extracted RGCN models: they ignore spatial-temporal information, and they lack cost-effective node sampling constraints. The proposed multiple-model and time-sensitive dynamic active learning (MTDAL) strategy relies on an RGCN committee to address the neglect of spatial-temporal information. It captures time-sensitive dynamic node importance from dynamic node representativeness and dynamic node informativeness. In the node sampling procedure, dynamic node representativeness is measured by the time-sensitive, weighted distance between node embeddings and the associated cluster centers obtained by semi-supervised clustering. Dynamic node informativeness is measured by the spatial-temporal disagreement among the node embeddings output by the RGCN committee, which comprises RGCNs with multiple model structures. To overcome the difficulty of cost-effective node sampling, MTDAL imposes class-balance constraints and makes a trade-off between aggregated dynamic node importance and standardized, nonequivalent node query cost. In the experiments, graph-based active learning strategies sample nodes to query different types of oracle models and exploit the labeled nodes to train the multiple RGCN models in the RGCN committee. Compared with the random, KCenterGreedy, and ALDG strategies, the proposed MTDAL strategy effectively samples the most critical dynamic nodes and extracts higher-fidelity RGCN models, especially in dynamic node classification tasks.
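To make the sampling idea described above concrete, the following minimal Python sketch (not the authors' implementation, which is linked under Code Availability) illustrates how time-weighted representativeness, committee disagreement, and a cost- and class-balance-aware greedy selection could be combined. Every function name, array shape, and formula here is an illustrative assumption rather than the paper's exact method.

```python
import numpy as np

# Illustrative sketch only: hypothetical names and simplified formulas,
# not the paper's exact MTDAL procedure.

def representativeness(embeddings, centers, time_weights):
    """Time-weighted closeness of each node's per-step embedding to its
    nearest cluster center (closer nodes score higher).
    embeddings: (T, N, D), centers: (K, D), time_weights: (T,)."""
    T, N, _ = embeddings.shape
    scores = np.zeros(N)
    for t in range(T):
        # distances of every node to every cluster center at step t
        d = np.linalg.norm(embeddings[t][:, None, :] - centers[None, :, :], axis=-1)
        scores += time_weights[t] / (1.0 + d.min(axis=1))
    return scores

def informativeness(committee_probs):
    """Committee disagreement, measured here as the variance of predicted
    class probabilities across members. committee_probs: (M, N, C) -> (N,)."""
    return committee_probs.var(axis=0).mean(axis=-1)

def select_nodes(importance, query_cost, pseudo_labels, budget, per_class_cap):
    """Greedy budgeted selection: rank nodes by importance per unit of
    standardized query cost and cap picks per predicted class as a crude
    class-balance constraint."""
    order = np.argsort(-importance / (query_cost + 1e-8))
    counts, chosen, spent = {}, [], 0.0
    for i in order:
        c = int(pseudo_labels[i])
        if spent + query_cost[i] > budget or counts.get(c, 0) >= per_class_cap:
            continue
        chosen.append(i)
        counts[c] = counts.get(c, 0) + 1
        spent += query_cost[i]
    return chosen

# Example usage with random data: T time steps, N nodes, D dims, K clusters,
# M committee members, C classes.
rng = np.random.default_rng(0)
T, N, D, K, M, C = 4, 50, 16, 3, 3, 5
emb = rng.normal(size=(T, N, D))
centers = rng.normal(size=(K, D))
probs = rng.dirichlet(np.ones(C), size=(M, N))  # committee class probabilities
importance = (representativeness(emb, centers, np.linspace(0.5, 1.0, T))
              + informativeness(probs))
picked = select_nodes(importance, rng.uniform(1, 3, N),
                      probs.mean(0).argmax(-1), budget=20.0, per_class_cap=5)
```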


Availability of data and materials

The DBLP3, DBLP5, Reddit, Brain, Chickenpox, and EngCovid graph datasets used in the experiments are publicly available, real-world datasets, and all cited references are available.

Code Availability

The code is available at https://github.com/blackzz133/MTDAL.


Acknowledgements

This work was supported by the Chongqing Technology Innovation & Application Development Key Project (No. cstc2020jscx-dxwtBX0055).

Funding

The Chongqing Technology Innovation & Application Development Key Project (No. cstc2020jscx-dxwtBX0055).

Author information


Contributions

ZZ wrote the paper and conducted the experiments; CW guided the research direction and provided the problem statement; FM improved some of the experiment programs; PW and HW analyzed the spatial-temporal graph data.

Corresponding author

Correspondence to Chengliang Wang.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zeng, Z., Wang, C., Ma, F. et al. Multiple-model and time-sensitive dynamic active learning for recurrent graph convolutional network model extraction attacks. Int. J. Mach. Learn. & Cyber. 15, 383–404 (2024). https://doi.org/10.1007/s13042-023-01916-4


  • DOI: https://doi.org/10.1007/s13042-023-01916-4
