Abstract
Accurately capturing inter-series and intra-series variations is crucial for multivariate long-term time-series forecasting. Existing channel-independent and channel-mixing approaches struggle with complex inter-series relationships, while RNN-based models face challenges in capturing long-term intra-series dependencies. Additionally, current decomposition methods struggle with complex trends in the series, further hindering intra-series modeling. To address these issues, we propose GDRNet, which comprises four components: a channel grouping block (CGB), a channel group multi-mixer block (CGMB), a time-slice dilated residual GRU (SDRGRU), and a multi-trend decomposition block (MTDB). CGB groups channels with similar distributions for inter-series learning, while CGMB captures complex dependencies between series across multiple granularities and perspectives. SDRGRU expands the receptive field and incorporates residual learning to capture long-term intra-series dependencies, while MTDB refines trend-seasonal decomposition, enabling more precise intra-series modeling. GDRNet achieves improvements of 9.12% and 22.30% in multivariate and univariate forecasting tasks, respectively, demonstrating its effectiveness in time-series forecasting.
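The abstract's two intra-series ideas build on standard constructions: trend-seasonal decomposition via a moving average, and interleaved time-slicing so that a recurrent cell sees every d-th step and thus covers a d-times-longer horizon. The sketch below illustrates only these generic baselines in plain Python; the function names and the padding choice are illustrative assumptions, not the paper's actual MTDB or SDRGRU implementation.

```python
def moving_average_trend(series, window):
    """Estimate the trend with a centered moving average, padding the
    edges with the boundary values so the output length matches the input."""
    half = window // 2
    padded = [series[0]] * half + list(series) + [series[-1]] * (window - half - 1)
    return [sum(padded[i:i + window]) / window for i in range(len(series))]

def decompose(series, window):
    """Classic trend-seasonal split: seasonal component = series - trend,
    so the two parts sum back to the original series."""
    trend = moving_average_trend(series, window)
    seasonal = [x - t for x, t in zip(series, trend)]
    return trend, seasonal

def dilated_slices(series, dilation):
    """Split a series into `dilation` interleaved sub-series.
    A recurrent cell run on each slice advances `dilation` original time
    steps per recurrence, enlarging its effective receptive field."""
    return [series[i::dilation] for i in range(dilation)]
```

For example, `dilated_slices([1, 2, 3, 4, 5, 6, 7, 8], 2)` yields `[[1, 3, 5, 7], [2, 4, 6, 8]]`: two half-length slices whose recurrent states each span the full original window.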
Data availability
The datasets supporting this study are openly available. ETT: https://github.com/zhouhaoyi/ETDataset. Traffic: http://pems.dot.ca.gov. Weather: https://www.bgc-jena.mpg.de/wetter. Electricity: https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014.
Funding
This work is supported by the National Key Research and Development Program of China under Grant No. 2024YFC3014300, the Yunnan Province Major Science and Technology Project under Grant No. 202302AD080006, the Tsinghua University Initiative Scientific Research Program under Grant No. 2024Z04W01001, and the Research Foundation of the Department of Natural Resources of Hunan Province under Grant No. HBZ20240123.
Author information
Authors and Affiliations
Contributions
Qingda Bao, Shengfa Miao, and Xin Jin helped in conceptualization; Qingda Bao and Ruoshu Wang helped in methodology and investigation; Qingda Bao and Yulin Tian worked in software; Qingda Bao contributed to writing—original draft preparation; Qingda Bao, Shengfa Miao, Puming Wang, and Qian Jiang contributed to writing—review and editing; Shengfa Miao helped in funding acquisition; Shengfa Miao, Shaowen Yao, and Da Hu helped in resources; and Shengfa Miao worked in supervision.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Consent for publication
The authors agree to the publication of this work and confirm that all the data used in the study are publicly accessible through the provided links.
Ethics approval
This research does not involve human participants nor animals.
Materials availability
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Bao, Q., Miao, S., Tian, Y. et al. GDRNet: a channel grouping based time-slice dilated residual network for long-term time-series forecasting. J Supercomput 81, 499 (2025). https://doi.org/10.1007/s11227-025-07011-5