Abstract
Previous studies have shown that concurrently extracting spatial and temporal information is an effective way to model spatial-temporal data. However, these studies fix the receptive field used to construct the carrier of concurrent extraction, which limits flexibility in selecting receptive fields and sacrifices scalability for capturing long-range temporal dependencies. Moreover, they learn static weights that are insufficient to describe complex spatial and temporal dependencies. In this paper, we propose the Concurrent Spatial-Temporal Transformer (CSTT), which ensures the denseness of the carrier of concurrent extraction so that messages under different receptive fields can be passed more efficiently, making the selection of receptive fields more flexible and guaranteeing the scalability needed to capture long-range temporal dependencies. In addition, a unified self-attention mechanism is applied to the carrier of concurrent extraction to capture spatial and temporal information, preserving the dependencies between both dimensions under different contextual information. On this basis, we design an iterative strategy to further handle long sequences. Experiments on four real-world traffic datasets show that our algorithm achieves significant improvements on the classical spatial-temporal modeling task.
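The core idea of a unified self-attention over a joint spatial-temporal carrier can be illustrated with a minimal sketch: each token represents one node at one time step, so a single attention operation couples spatial and temporal positions jointly. All names, shapes, and the single-head formulation below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def unified_st_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over flattened space-time tokens.

    x : (N*T, d) array -- one token per (node, time step) pair, so the
    attention matrix mixes spatial and temporal dependencies at once.
    (Illustrative sketch only; CSTT's actual carrier differs.)
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])      # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over all tokens
    return attn @ v

rng = np.random.default_rng(0)
N, T, d = 4, 3, 8                 # e.g. 4 sensors, 3 time steps, dim 8
x = rng.normal(size=(N * T, d))   # flatten space and time into one axis
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = unified_st_attention(x, Wq, Wk, Wv)
print(out.shape)  # (12, 8): one contextualized vector per space-time token
```

Because space and time are flattened into a single token axis, every output vector can attend to any node at any time step, which is what distinguishes this unified scheme from applying spatial and temporal attention separately.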
Acknowledgements
This work is funded in part by the National Natural Science Foundation of China under Project No. U1936213. It is also supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Xie, Y. et al. (2022). Concurrent Transformer for Spatial-Temporal Graph Modeling. In: Bhattacharya, A., et al. Database Systems for Advanced Applications. DASFAA 2022. Lecture Notes in Computer Science, vol 13247. Springer, Cham. https://doi.org/10.1007/978-3-031-00129-1_26
DOI: https://doi.org/10.1007/978-3-031-00129-1_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-00128-4
Online ISBN: 978-3-031-00129-1