Abstract
Long-term time series forecasting aims to accurately predict the future by analyzing the temporal patterns present in historical data. Encoder-decoder architectures built on the self-attention mechanism have achieved impressive results and remain at the forefront of the field. To address the quadratic time complexity of self-attention, researchers have exploited the long-tailed distribution of attention scores and proposed several selection-based approaches that improve efficiency while maintaining or improving accuracy. However, these efforts have focused mainly on practical implementation rather than on the theoretical principles underlying their effectiveness. Inspired by the growing disparity between graph nodes in graph neural networks (GNNs), we investigate how the long-tailed distribution of self-attention scores affects prediction accuracy. We propose a novel approach that sharpens the distinction between self-attention scores and thereby improves performance. We incorporate this approach into a state-of-the-art model and validate its effectiveness through theoretical analysis and visualization. Extensive experiments on four large datasets show that our method outperforms existing approaches, reducing MSE by 18% and MAE by 12% on average while cutting time consumption by 38%.
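To make the selection idea mentioned above concrete, the sketch below illustrates an Informer-style selective attention step in NumPy: queries whose score distributions deviate most from uniform (the "active" queries under the long-tailed distribution) receive full attention, while the remaining queries fall back to the mean of the values. The function names and the max-minus-mean activity measure are illustrative assumptions for exposition, not the exact formulation proposed in this paper.

```python
import numpy as np

def activity_measure(scores):
    """Max-minus-mean proxy for how far each query's score
    distribution deviates from uniform; long-tailed (active)
    queries score high under this measure."""
    return scores.max(axis=-1) - scores.mean(axis=-1)

def selective_attention(Q, K, V, top_u):
    """Selection-based attention sketch: only the top_u most active
    queries attend over all keys; lazy queries use the context mean."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (L_q, L_k) raw scores
    act = activity_measure(scores)                   # per-query activity
    top = np.argsort(act)[-top_u:]                   # most active queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))   # lazy queries: mean of V
    sel = scores[top]
    sel = np.exp(sel - sel.max(axis=-1, keepdims=True))
    attn = sel / sel.sum(axis=-1, keepdims=True)     # softmax over keys
    out[top] = attn @ V                              # active queries: full attention
    return out
```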
Acknowledgements
This research was sponsored by the National Natural Science Foundation of China (Grant No. 62272126) and the Fundamental Research Funds for the Central Universities (Grant No. 3072022TS0605).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Meng, X., Li, W., Zheng, W., Zhao, Z., Feng, G., Wang, H. (2023). Make Active Attention More Active: Using Lipschitz Regularity to Improve Long Sequence Time-Series Forecasting. In: Huang, DS., Premaratne, P., Jin, B., Qu, B., Jo, KH., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol 14087. Springer, Singapore. https://doi.org/10.1007/978-981-99-4742-3_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-4741-6
Online ISBN: 978-981-99-4742-3