Abstract
Long-term spatio-temporal prediction (LTSTP) over different resolutions plays a crucial role in planning and dispatching for smart city applications such as smart transportation and smart grids. The Transformer, which has demonstrated superiority in capturing long-term dependencies, has recently been studied for spatio-temporal prediction. However, it remains difficult for the Transformer to exploit both multi-resolution knowledge and spatio-temporal dependencies for LTSTP. The challenge lies in two issues: (1) efficiently fusing information across multiple resolutions, which typically demands elaborate and complicated modifications to the model, and (2) handling the necessary long input sequences, which makes concurrent space and time attention too costly to perform. To address these issues, we propose a multi-resolution recursive spatio-temporal Transformer (Mu2ReST). It implements a novel multi-resolution structure that predicts recursively from coarser to finer resolutions, showing that an arduous modification of the model is not the only way to leverage multi-resolution knowledge. It further uses a redesigned lightweight space-time attention to capture spatial and temporal dependencies concurrently. Experimental results on open and commercial urban datasets demonstrate that Mu2ReST outperforms existing methods on multi-resolution LTSTP tasks.
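To make the recursive coarse-to-fine idea concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: all module names, shapes, and the upsampling scheme are illustrative assumptions, and the per-resolution predictor (in the paper, a Transformer with lightweight space-time attention) is replaced by a single convolution for brevity. The point it shows is purely structural: each finer resolution is conditioned on the upsampled prediction from the coarser one.

```python
# A minimal sketch (not the authors' code) of recursive coarse-to-fine
# prediction over multi-resolution city grids. Shapes and modules are
# hypothetical assumptions for illustration.
import torch
import torch.nn as nn

class ResolutionPredictor(nn.Module):
    """Placeholder per-resolution model; Mu2ReST uses a Transformer with
    lightweight space-time attention here, stubbed as one conv layer."""
    def __init__(self, channels):
        super().__init__()
        # input = history + upsampled coarser prediction (2 * channels)
        self.net = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, history, coarse_context):
        return self.net(torch.cat([history, coarse_context], dim=1))

def recursive_forecast(histories, predictors):
    """histories: list of (B, C, H_r, W_r) grids, coarsest first.
    Each finer prediction is conditioned on the upsampled coarser one."""
    context = torch.zeros_like(histories[0])  # no coarser level at the start
    preds = []
    for hist, model in zip(histories, predictors):
        # upsample the previous (coarser) prediction to this resolution
        context = nn.functional.interpolate(context, size=hist.shape[-2:])
        pred = model(hist, context)
        preds.append(pred)
        context = pred
    return preds

# Usage: three resolutions of an 8-channel city grid, coarsest first.
sizes = [(4, 4), (8, 8), (16, 16)]
hists = [torch.randn(2, 8, h, w) for h, w in sizes]
models = [ResolutionPredictor(8) for _ in sizes]
for p in recursive_forecast(hists, models):
    print(p.shape)
```

Under this reading, multi-resolution knowledge enters through the recursion itself rather than through attention-level changes, which is why no elaborate modification of the base model is required.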
H. Niu and C. Meng contributed equally.
Notes
- 1.
- 2. The taxi zones and boroughs are according to https://data.cityofnewyork.us/Transportation/NYC-Taxi-Zones/d3c5-ddgc and https://www1.nyc.gov/assets/doh/downloads/pdf/survey/uhf_map_100604.pdf, respectively.
- 3. In fact, aggregation leads to information loss [9]; a short numerical illustration follows these notes.
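The following tiny NumPy example (our addition, not taken from [9]) illustrates footnote 3: two different hourly demand series collapse to identical daily totals, so the fine-grained signal cannot be recovered from the aggregate alone.

```python
# A tiny illustration of information loss under temporal aggregation:
# distinct hourly series yield the same daily sums.
import numpy as np

hourly_a = np.array([0, 0, 10, 2] * 6)   # bursty demand, 24 hours
hourly_b = np.array([3, 3, 3, 3] * 6)    # flat demand, 24 hours
daily_a = hourly_a.reshape(-1, 24).sum(axis=1)
daily_b = hourly_b.reshape(-1, 24).sum(axis=1)
print(daily_a, daily_b)  # identical aggregates: [72] [72]
```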
References
Chen, C.F., Fan, Q., Panda, R.: CrossViT: cross-attention multi-scale vision transformer for image classification. In: ICCV (2021)
Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv (2019)
Choromanski, K., et al.: Rethinking attention with performers. In: ICLR (2021)
Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. In: NeurIPS Workshop (2014)
Grigsby, J., Wang, Z., Qi, Y.: Long-range transformers for dynamic spatiotemporal forecasting. arXiv (2021)
Ke, J., Wang, Q., Wang, Y., Milanfar, P., Yang, F.: MUSIQ: multi-scale image quality transformer. In: ICCV (2021)
Kitaev, N., Kaiser, Ł., Levskaya, A.: Reformer: the efficient transformer. In: ICLR (2020)
Lai, G., Chang, W.C., Yang, Y., Liu, H.: Modeling long- and short-term temporal patterns with deep neural networks. In: SIGIR (2018)
Lee, B.H., Park, J.: A spectral measure for the information loss of temporal aggregation. J. Stat. Theory Pract. 14, 1–23 (2020)
Li, S., et al.: Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In: NeurIPS (2019)
Parmar, N., et al.: Image transformer. In: ICML (2018)
Torres, J.F., Hadjout, D., Sebaa, A., Martínez-Álvarez, F., Troncoso, A.: Deep learning for time series forecasting: a survey. Big Data 9(1), 3–21 (2021)
Vaswani, A., et al.: Attention is all you need. In: NeurIPS (2017)
Wolf, T., et al.: Transformers: state-of-the-art natural language processing. In: EMNLP (2020)
Wu, Z., Pan, S., Long, G., Jiang, J., Zhang, C.: Graph wavenet for deep spatial-temporal graph modeling. In: IJCAI (2019)
Xu, M., et al.: Spatial-temporal transformer networks for traffic flow forecasting. arXiv (2020)
Zheng, C., Fan, X., Wang, C., Qi, J.: GMAN: a graph multi-attention network for traffic prediction. In: AAAI (2020)
Zhou, H., et al.: Informer: beyond efficient transformer for long sequence time-series forecasting. In: AAAI (2021)
Acknowledgements
Chuizheng Meng is partially supported by KDDI Research, Inc. and NSF Research Grant CCF-1837131. Defu Cao is partially supported by KDDI Research, Inc. and the Annenberg Fellowship of the University of Southern California.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Niu, H. et al. (2022). Mu2ReST: Multi-resolution Recursive Spatio-Temporal Transformer for Long-Term Prediction. In: Gama, J., Li, T., Yu, Y., Chen, E., Zheng, Y., Teng, F. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2022. Lecture Notes in Computer Science, vol 13280. Springer, Cham. https://doi.org/10.1007/978-3-031-05933-9_6
DOI: https://doi.org/10.1007/978-3-031-05933-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05932-2
Online ISBN: 978-3-031-05933-9
eBook Packages: Computer Science, Computer Science (R0)