DCFA-iTimeNet: Dynamic cross-fusion attention network for interpretable time series prediction

Published in Applied Intelligence

Abstract

Although time series prediction research in engineering and technology has achieved breakthrough performance, two challenges remain: modeling the complex dynamic interactions between variables, and interpretability. To address these problems, a novel two-stage framework called DCFA-iTimeNet is introduced. In the first stage, this paper proposes a dynamic cross-fusion attention mechanism (DCFA). This module enables the model to exchange information between different patches of the time series, thereby capturing the complex interactions between variables across time. In the second stage, we employ a decomposition-based linear explainable Bidirectional Gated Recurrent Unit (DeLEBiGRU), composed mainly of a standard BiGRU and a tensorized BiGRU, to analyze each variable's historical long-term, instantaneous, and future impacts. This design is crucial for understanding how each variable influences the overall prediction over time. Extensive experimental results demonstrate that the proposed model effectively models and interprets the complex dynamic relationships of multivariate time series, makes the model's decision-making process understandable, and outperforms state-of-the-art methods.
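The abstract describes DCFA as attention that exchanges information between patches of a time series to capture inter-variable interactions. The full paper is not included here, so the following is only a minimal, hypothetical sketch of that idea using standard multi-head cross-attention; the class name, shapes, and residual connection are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class CrossFusionAttention(nn.Module):
    """Illustrative sketch (not the paper's code): patches derived from one
    variable attend to patches derived from another, so interactions between
    variables across time can be mixed into the query representation."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, patches_q: torch.Tensor, patches_kv: torch.Tensor) -> torch.Tensor:
        # patches_q:  (batch, num_patches, d_model) queries from variable A
        # patches_kv: (batch, num_patches, d_model) keys/values from variable B
        fused, _ = self.attn(patches_q, patches_kv, patches_kv)
        # Residual connection keeps the original patch information.
        return fused + patches_q

batch, num_patches, d_model = 2, 8, 32
a = torch.randn(batch, num_patches, d_model)
b = torch.randn(batch, num_patches, d_model)
out = CrossFusionAttention(d_model)(a, b)
print(out.shape)  # torch.Size([2, 8, 32])
```

The output keeps the query's shape, so such a module could be stacked or applied pairwise across variables; how DCFA actually parameterizes the "dynamic" fusion is specified only in the full text.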



Funding

The work is supported by Chongqing Master’s Graduate Research and Innovation Project (No. CYS240186).

Author information


Corresponding author

Correspondence to Jianjun Yuan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yuan, J., Wu, F., Zhao, L. et al. DCFA-iTimeNet: Dynamic cross-fusion attention network for interpretable time series prediction. Appl Intell 55, 86 (2025). https://doi.org/10.1007/s10489-024-05973-2

