Abstract
Temporal point processes (TPPs) are applicable in many fields, including healthcare, device failure prediction, and social media, because they precisely model the occurrence of events together with their types and timestamps. Although recent studies have integrated deep learning and reinforcement learning techniques into TPPs, most focus only on the event sequence and ignore other fundamental information such as time series data. Moreover, the majority of these studies predict only the next event, which may not be sufficient for practical applications that require multi-step event prediction. We therefore propose TGPPN, a model that employs a Transformer structure to address the multi-step forecasting task and a graph neural network to handle multivariate time series in conjunction with event sequences. Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
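To make the described architecture concrete, the sketch below shows one plausible way a Transformer event-sequence branch and a graph-based time-series branch could be combined for multi-step prediction. It is a minimal, hypothetical reconstruction based only on the abstract, not the authors' implementation: the class name TGPPNSketch, all dimensions, the learned-adjacency graph convolution, and the joint multi-step heads are assumptions.

```python
# Hypothetical sketch of a TGPPN-style model (not the authors' code):
# a Transformer encoder summarizes the event sequence, a single graph
# convolution over a learned adjacency mixes the multivariate time series,
# and shared heads emit multi-step event-type logits and inter-event times.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TGPPNSketch(nn.Module):
    def __init__(self, num_event_types, num_series, d_model=64,
                 n_heads=4, n_layers=2, horizon=3):
        super().__init__()
        self.horizon = horizon
        self.num_event_types = num_event_types
        # Event-sequence branch: type embedding plus a learned time projection.
        self.type_emb = nn.Embedding(num_event_types, d_model)
        self.time_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Time-series branch: one graph-convolution step over a learned
        # adjacency among the series (a stand-in for the paper's GNN).
        self.adj = nn.Parameter(torch.randn(num_series, num_series))
        self.series_proj = nn.Linear(num_series, d_model)
        # Multi-step heads: event-type logits and positive inter-event gaps.
        self.type_head = nn.Linear(2 * d_model, horizon * num_event_types)
        self.time_head = nn.Linear(2 * d_model, horizon)

    def forward(self, event_types, event_times, series):
        # event_types: (B, L) int64; event_times: (B, L); series: (B, T, V)
        h = self.type_emb(event_types) + self.time_proj(event_times.unsqueeze(-1))
        h = self.encoder(h)[:, -1]                   # summary at last event
        a = torch.softmax(self.adj, dim=-1)          # normalized adjacency
        s = series @ a                               # mix series via the graph
        s = self.series_proj(s).mean(dim=1)          # pool over time steps
        z = torch.cat([h, s], dim=-1)                # fuse the two branches
        logits = self.type_head(z).view(-1, self.horizon, self.num_event_types)
        dt = F.softplus(self.time_head(z))           # positive time gaps
        return logits, dt
```

One design choice worth noting: this sketch predicts the whole horizon jointly in a single forward pass rather than autoregressively, which is one common way to frame multi-step TPP forecasting; the paper may realize the multi-step task differently.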
Acknowledgement
This work is partially supported by the Program of Technology Innovation of the Science and Technology Commission of Shanghai Municipality (Grant Nos. 21511104700 and 21DZ1205000) and by the China National Science Foundation (Grant No. 62072301).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Qi, Q., Shen, S., Cao, J., Guan, W., Gan, S. (2024). TGPPN: A Transformer and Graph Neural Network Based Point Process Network Model. In: Sun, Y., Lu, T., Wang, T., Fan, H., Liu, D., Du, B. (eds) Computer Supported Cooperative Work and Social Computing. ChineseCSCW 2023. Communications in Computer and Information Science, vol 2012. Springer, Singapore. https://doi.org/10.1007/978-981-99-9637-7_17
DOI: https://doi.org/10.1007/978-981-99-9637-7_17
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-9636-0
Online ISBN: 978-981-99-9637-7
eBook Packages: Computer Science, Computer Science (R0)