Abstract
In the evolving landscape of graph neural networks (GNNs), this work addresses the inherent challenges posed by noise and adversarial perturbations in network-structured data. We propose a GNN model with feature fusion (FFGNN) designed to enhance the resilience and reliability of GNNs in practical scenarios. FFGNN introduces a denoising module that improves robustness and suppresses excessive feature smoothing, and incorporates an attention mechanism to further improve performance. Experiments on benchmark datasets, including Cora, CiteSeer, and PubMed, demonstrate the advantages of the proposed framework across a range of conditions. We evaluate FFGNN under feature noise, adversarial attacks, and clean data, and show that the complementary denoising and attention modules significantly improve robustness and accuracy over baseline models. This work offers a novel approach to GNN design grounded in graph signal denoising, ensuring stable performance across diverse applications.
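To make the architecture described above concrete, here is a minimal sketch, assuming a PyTorch implementation, of how a graph-signal-denoising propagation step and an attention-based feature-fusion gate could be combined. The names FFGNN, denoise_propagate, alpha, and num_steps are illustrative assumptions for this sketch and do not reflect the authors' actual code.

```python
# Minimal, hypothetical sketch of a feature-fusion GNN; class and parameter
# names (FFGNN, alpha, num_steps) are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def denoise_propagate(x, adj_norm, alpha=0.1, num_steps=10):
    """Graph-signal denoising by iterative propagation: each step balances
    smoothness over the normalized adjacency against fidelity to the input."""
    h = x
    for _ in range(num_steps):
        h = (1 - alpha) * (adj_norm @ h) + alpha * x
    return h


class FFGNN(nn.Module):
    """Fuses a raw-feature branch and a denoised branch with a learned gate."""

    def __init__(self, in_dim, hid_dim, out_dim, alpha=0.1, num_steps=10):
        super().__init__()
        self.encode = nn.Linear(in_dim, hid_dim)
        self.gate = nn.Linear(2 * hid_dim, 1)  # attention over the two branches
        self.classify = nn.Linear(hid_dim, out_dim)
        self.alpha, self.num_steps = alpha, num_steps

    def forward(self, x, adj_norm):
        h_raw = F.relu(self.encode(x))                        # raw-feature branch
        h_dn = denoise_propagate(h_raw, adj_norm,
                                 self.alpha, self.num_steps)  # denoised branch
        w = torch.sigmoid(self.gate(torch.cat([h_raw, h_dn], dim=-1)))
        fused = w * h_dn + (1 - w) * h_raw                    # feature fusion
        return self.classify(fused)                           # per-node class logits
```

In this sketch, the propagation loop implements the standard denoising trade-off between smoothness over the normalized adjacency and fidelity to the input features, while the sigmoid gate plays the role of an attention mechanism that fuses the raw and denoised branches before classification.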


Data availability
All data for this study are from the laboratory and are shown in the manuscript. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Funding
This work was supported by the Key Research and Development Program of Xianyang (Grant No. S2023-ZDYF-QYCX-1794).
Author information
Authors and Affiliations
Contributions
All authors contributed to the conception and design of the study. Material preparation, data collection, and analysis were performed by Yan Jin, Haoyu Shi, and Huaiye Meng. Haoyu Shi wrote the first draft of the manuscript, and all authors commented on and revised previous versions. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no potential conflicts of interest to declare.
Ethical approval and consent to participate
This research adheres to high ethical standards and respects the informed consent rights of all participants. The data used are sourced from publicly available repositories or, in the case of private data, used with explicit authorization. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Jin, Y., Shi, H. & Meng, H. Robust graph neural networks based on feature fusion. J Supercomput 81, 406 (2025). https://doi.org/10.1007/s11227-025-06917-4