Impact Statement:
LSTF is an active topic in time-series forecasting: it predicts the values of variables over a long horizon, and the resulting forecasts support decision-making in many fields. However, existing models struggle with information coupling and intertwined temporal patterns, which can undermine prediction accuracy. This article proposes a new end-to-end model, PCDformer, which addresses both problems and offers a new approach to improving forecasting accuracy. PCDformer assigns a separate convolutional layer to each variable to learn its temporal dependence, avoiding the impact of information coupling. In addition, PCDformer incorporates series decomposition and sparse self-attention into the canonical Transformer to disentangle the series and capture correlations between variables. Experimental results demonstrate the superiority of the proposed model over state-of-the-art methods on datasets covering traf...
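A minimal sketch of the "separate convolutional layer per variable" idea, assuming a grouped 1-D convolution is used to keep variables independent; the class name and hyperparameters below are illustrative, not taken from the paper.

```python
# Hedged sketch: one temporal convolution per input variable, run in parallel.
# A grouped Conv1d with groups == num_vars gives each variable its own filter
# bank, so variables are never mixed (no information coupling at this stage).
import torch
import torch.nn as nn

class PerVariableConv(nn.Module):
    def __init__(self, num_vars: int, d_per_var: int = 16, kernel_size: int = 3):
        super().__init__()
        # groups=num_vars -> variable i is convolved only with its own channel
        self.conv = nn.Conv1d(
            in_channels=num_vars,
            out_channels=num_vars * d_per_var,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            groups=num_vars,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, num_vars); Conv1d expects (batch, channels, seq_len)
        h = self.conv(x.transpose(1, 2))
        return h.transpose(1, 2)           # (batch, seq_len, num_vars * d_per_var)

# usage: 8 series, 96 time steps, 7 variables
x = torch.randn(8, 96, 7)
print(PerVariableConv(num_vars=7)(x).shape)   # torch.Size([8, 96, 112])
```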
Abstract:
Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time-series prediction models have certain limitations. First, they map the multidimensional information at each time step into a single high-dimensional representation, which couples the variables' information and makes prediction harder. Second, they fail to effectively decompose the intertwined temporal patterns within the series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and a decomposed sparse-Transformer (PCDformer). PCDformer decouples the input sequence by parallelizing the convolutional layers, processing the different variables of the input sequence simultaneously. To decompose distinct tempora...
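The abstract also refers to decomposing temporal patterns. Below is a hedged sketch of a standard moving-average series decomposition (the kind used in decomposition-based Transformers such as Autoformer); the paper's exact block may differ, and the kernel size is an assumption.

```python
# Hedged sketch: split a series into a smooth trend and a seasonal residual.
# The trend is a moving average of the input; the seasonal part is what remains,
# giving downstream layers two simpler, disentangled patterns to model.
import torch
import torch.nn as nn

class SeriesDecomposition(nn.Module):
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        # count_include_pad=False keeps the averages near the edges unbiased
        self.moving_avg = nn.AvgPool1d(
            kernel_size, stride=1, padding=kernel_size // 2, count_include_pad=False
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, num_vars)
        trend = self.moving_avg(x.transpose(1, 2)).transpose(1, 2)
        seasonal = x - trend               # residual after removing the trend
        return seasonal, trend

seasonal, trend = SeriesDecomposition()(torch.randn(8, 96, 7))
print(seasonal.shape, trend.shape)          # both torch.Size([8, 96, 7])
```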
Published in: IEEE Transactions on Artificial Intelligence (Volume: 5, Issue: 10, October 2024)