Abstract
As sensors are deployed ever more widely, the collected data become both voluminous and high-dimensional, which poses enormous challenges to multivariate long sequence time-series forecasting (MLTF). Existing methods for MLTF tasks cannot efficiently capture neighborhood and long-range dependencies, resulting in low prediction accuracy. In this paper, we propose a novel multivariate long sequence time-series forecasting method, called ML-Former, that captures both neighborhood and long-range dependencies to enhance prediction capacity. Specifically, ML-Former first performs a time-series embedding that integrates neighborhood dependencies, positions, and timestamps. It then captures neighborhood and long-range dependencies with a time-series encoder-decoder. Furthermore, an innovative loss function is designed to improve the convergence of ML-Former. Experimental results on three real-world datasets show that ML-Former reduces forecasting error by up to 35.4% compared with benchmark methods.
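To make the embedding step concrete, the sketch below shows one common way such a time-series embedding can be assembled: a 1-D convolution over the time axis to mix each point with its neighbors (neighborhood dependencies), a fixed sinusoidal positional encoding, and a linear projection of calendar features (timestamps). This is a minimal illustration under our own assumptions, not the paper's exact design; the class name TimeSeriesEmbedding and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class TimeSeriesEmbedding(nn.Module):
    """Sketch: sum of value, positional, and timestamp embeddings (hypothetical design)."""

    def __init__(self, n_vars: int, d_model: int, n_time_feats: int, max_len: int = 5000):
        super().__init__()
        # 1-D convolution over time mixes each step with its neighbors,
        # injecting local (neighborhood) context into the value embedding.
        self.value_emb = nn.Conv1d(n_vars, d_model, kernel_size=3,
                                   padding=1, padding_mode="circular")
        # Fixed sinusoidal positional encoding.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float()
                        * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        # Linear projection of timestamp features (hour, weekday, ...) per step.
        self.time_emb = nn.Linear(n_time_feats, d_model)

    def forward(self, x: torch.Tensor, x_time: torch.Tensor) -> torch.Tensor:
        # x:      (batch, seq_len, n_vars)       raw multivariate readings
        # x_time: (batch, seq_len, n_time_feats) timestamp features
        val = self.value_emb(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, d_model)
        return val + self.pe[: x.size(1)] + self.time_emb(x_time)
```

The resulting sequence of d_model-dimensional tokens would then be fed to an encoder-decoder, whose attention layers model the long-range dependencies.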
This paper was partially supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX), and Shanghai Trusted Industry Internet Software Collaborative Innovation Center.
Cite this paper
Ke, Z., Cui, Y., Li, L., Wei, T. (2022). ML-FORMER: Forecasting by Neighborhood and Long-Range Dependencies. In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning – ICANN 2022. ICANN 2022. Lecture Notes in Computer Science, vol 13531. Springer, Cham. https://doi.org/10.1007/978-3-031-15934-3_59