Abstract
Many social activities can be described as asynchronous discrete event sequences, such as traffic accidents, medical care, financial transactions, social networks, and violent crimes. Predicting the occurrence probabilities, occurrence times, and types of future events is a challenging and highly important problem, with broad applications in urban management, traffic optimization, and other fields. Hawkes processes are widely used to model such complex event sequences. Recently, the neural Hawkes process (NHP) and the transformer Hawkes process (THP) were proposed to expand the capacity of the Hawkes process. We argue that the introduction of recurrent neural networks or attention mechanisms makes these models unnecessarily complex: while the attention mechanism achieves good performance, it is not essential. Therefore, in this paper we propose a Two-stage Multilayer Perceptron Hawkes Process (TMPHP). The model consists of two types of multilayer perceptrons: one applied independently within each event sequence, learning per-sequence features to capture long-term dependencies between events, and one applied across event sequences, capturing both long-term and short-term dependencies between different events. Our model is simpler than other state-of-the-art models, yet it achieves better predictive performance. In particular, on the MIMIC dataset our model outperforms RMTPP, NHP, and THP in prediction accuracy by 4.2%, 2.2%, and 2.2%, respectively.
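The two-stage design described in the abstract resembles an MLP-Mixer-style block applied to event-sequence embeddings: one MLP mixes information along the sequence (event) axis, the other along the feature axis. The following is a minimal NumPy sketch of such a block under that assumption; all names (`two_stage_mlp_block`, the weight matrices, the dimensions) are illustrative, not the authors' actual implementation.

```python
import numpy as np

def gelu(x):
    # GELU activation (tanh approximation, Hendrycks & Gimpel 2016)
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-5):
    # Normalize each event embedding over its feature dimension
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def two_stage_mlp_block(x, w1, w2, w3, w4):
    """One mixer-style block over an event-sequence embedding.

    x : (seq_len, d_model) array of event embeddings.
    Stage 1 mixes along the sequence axis (dependencies between
    events); Stage 2 mixes along the feature axis. Both stages
    use residual connections, as in MLP-Mixer.
    """
    # Stage 1: event (token) mixing -- the MLP acts on each feature column
    y = layer_norm(x)
    y = w2 @ gelu(w1 @ y)          # (seq_len, d_model)
    x = x + y
    # Stage 2: feature (channel) mixing -- the MLP acts on each event row
    y = layer_norm(x)
    y = gelu(y @ w3) @ w4          # (seq_len, d_model)
    return x + y

# Toy usage with hypothetical dimensions
rng = np.random.default_rng(0)
seq_len, d_model, hidden = 8, 16, 32
x = rng.standard_normal((seq_len, d_model))
w1 = rng.standard_normal((hidden, seq_len)) * 0.1
w2 = rng.standard_normal((seq_len, hidden)) * 0.1
w3 = rng.standard_normal((d_model, hidden)) * 0.1
w4 = rng.standard_normal((hidden, d_model)) * 0.1
out = two_stage_mlp_block(x, w1, w2, w3, w4)
print(out.shape)  # (8, 16)
```

In the full model, the block's output would feed the conditional intensity function used to predict event times and types; that part of the pipeline is not sketched here.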
This work was supported by the Science Foundation of China University of Petroleum, Beijing (No. 2462020YXZZ023).
References
Bacry, E., Mastromatteo, I., Muzy, J.: Hawkes processes in finance. Mark. Microstruct. Liquid. 1, 1550005 (2015)
Ogata, Y.: Space-time point-process models for earthquake occurrences. Ann. Inst. Stat. Math. 50, 379–402 (1998)
Wang, L., Zhang, W., He, X., Zha, H.: Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2447–2456 (2018)
Zhou, K., Zha, H., Song, L.: Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In: Artificial Intelligence and Statistics, pp. 641–649. PMLR (2013)
Vere-Jones, D., Daley, D.J.: An Introduction to the Theory of Point Processes. Springer Series in Statistics. Springer, New York (1988). https://doi.org/10.1007/978-1-4757-2001-3
Hawkes, A.: Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90 (1971)
Graves, A.: Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence (2008)
Du, N., Dai, H., Trivedi, R.S., Upadhyay, U., Gomez-Rodriguez, M., Song, L.: Recurrent marked temporal point processes: embedding event history to vector. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)
Xiao, S., Yan, J., Yang, X., Zha, H., Chu, S.: Modeling the intensity function of point process via recurrent neural networks. In: AAAI (2017)
Mei, H., Eisner, J.: The neural Hawkes process: a neurally self-modulating multivariate point process. In: NIPS (2017)
Vaswani, A., et al.: Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)
Zhang, Q., Lipani, A., Kirnap, O., Yilmaz, E.: Self-attentive Hawkes process. In: ICML, pp. 11183–11193 (2020)
Zuo, S., Jiang, H., Li, Z., Zhao, T., Zha, H.: Transformer Hawkes process. In: ICML, pp. 11692–11702 (2020)
Tolstikhin, I., et al.: MLP-mixer: an all-MLP architecture for vision. arXiv preprint arXiv:2105.01601 (2021)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2018). arXiv preprint arXiv:1810.04805
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog, vol. 1 (2019)
Luo, W., Li, Y., Urtasun, R., Zemel, R.: Understanding the effective receptive field in deep convolutional neural networks. In: NeurIPS (2016)
Dosovitskiy, A., et al.: An image is worth 16 × 16 words: transformers for image recognition at scale. In: ICLR (2021)
Hendrycks, D., Gimpel, K.: Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415 (2016)
Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer, New York (2004)
Neumaier, A.: Introduction to Numerical Analysis (2001)
Zhao, Q., Erdogdu, M., He, H.Y., Rajaraman, A., Leskovec, J.: SEISMIC: a self-exciting point process model for predicting tweet popularity. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015)
Johnson, A.E.W., et al.: MIMIC-III, a freely accessible critical care database. Sci. Data 3 (2016)
Leskovec, J., Krevl, A.: SNAP datasets: Stanford Large Network Dataset Collection (2014)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Xing, X., Liu, J.w., Cheng, Z.h. (2023). Two-Stage Multilayer Perceptron Hawkes Process. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_3
DOI: https://doi.org/10.1007/978-981-99-1639-9_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-1638-2
Online ISBN: 978-981-99-1639-9