
Subgraph representation learning with self-attention and free adversarial training

Published in: Applied Intelligence

Abstract

Owing to its capacity to capture subgraph-level information within graph data, subgraph representation learning has garnered considerable attention in recent years. However, current subgraph representation learning methods neglect the relational information among subgraphs and suffer from noise interference originating in the subgraph nodes. To address these issues, this paper proposes SGSFA, a subgraph representation learning method with self-attention and free adversarial training. Specifically, the method employs a self-attention mechanism to calculate the similarity among subgraphs and, based on this similarity, captures the relational information among them. In addition, it uses an improved free adversarial training approach to generate iterative perturbations that eliminate the noise hampering subgraph representation; applying these perturbations to the subgraph nodes simultaneously enhances the subgraphs' node features. Extensive experiments on four real-world subgraph datasets demonstrate that the proposed method outperforms state-of-the-art baselines. The source code of SGSFA is publicly available at https://github.com/denggaoqin/SGSFA.
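
To make the abstract's two core ideas concrete, the sketch below illustrates what self-attention over pooled subgraph embeddings and a "free" adversarial perturbation loop on node features typically look like in PyTorch. This is a minimal illustration under assumed shapes and hyperparameters (dim, ascent_steps, eps, and alpha are all hypothetical), not the authors' implementation; the actual SGSFA code is in the linked repository.

```python
# Minimal sketch of (1) self-attention across subgraph embeddings to capture
# inter-subgraph relations and (2) free-adversarial-style iterative
# perturbations on node features. All names and values are illustrative
# assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SubgraphSelfAttention(nn.Module):
    """Scaled dot-product self-attention over pooled subgraph embeddings."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, s):  # s: (num_subgraphs, dim), e.g. mean-pooled GNN output
        q, k, v = self.q(s), self.k(s), self.v(s)
        # Pairwise similarity among subgraphs, normalised into attention weights.
        attn = F.softmax(q @ k.t() / s.size(-1) ** 0.5, dim=-1)
        return attn @ v  # relation-aware subgraph embeddings


def free_adversarial_step(model, x, labels, ascent_steps=3, eps=1e-2, alpha=1e-3):
    """One 'free'-style adversarial pass: each backward call is reused to
    update the perturbation delta AND to accumulate model gradients.
    Assumes the caller zeroed model gradients beforehand and will call
    optimizer.step() afterwards."""
    delta = torch.zeros_like(x, requires_grad=True)  # perturbation on node features
    for _ in range(ascent_steps):
        loss = F.cross_entropy(model(x + delta), labels)
        loss.backward()  # gradients flow to delta and to model parameters
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the perturbation
            delta.clamp_(-eps, eps)             # keep the perturbation small
        delta.grad.zero_()
```

In a free adversarial training scheme of this kind, each backward pass does double duty: it updates the perturbation by gradient ascent and simultaneously accumulates parameter gradients, so adversarial robustness is obtained at roughly the cost of standard training.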




Data Availability

Data sharing is not applicable.


Funding

This study was funded by the Guizhou Provincial Key Technology R&D Program (grants QKHZC (2022) YB074 and QKHZDZX (2022) 001).

Author information


Contributions

Denggao Qin: Data Curation, Methodology, Writing - Original Draft; Xianghong Tang: Writing - Review & Editing, Supervision; Jianguang Lu: Writing - Review & Editing, Supervision.

Corresponding author

Correspondence to Xianghong Tang.

Ethics declarations

Conflicts of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Qin, D., Tang, X. & Lu, J. Subgraph representation learning with self-attention and free adversarial training. Appl Intell 54, 7012–7029 (2024). https://doi.org/10.1007/s10489-024-05542-7

