
Two-Stage Multilayer Perceptron Hawkes Process

  • Conference paper
Neural Information Processing (ICONIP 2022)

Abstract

Many social activities can be described as asynchronous discrete event sequences, such as traffic accidents, medical care, financial transactions, social networks and violent crimes. Predicting the occurrence probabilities, occurrence times and types of events is a challenging and highly important problem, with broad application prospects in urban management, traffic optimization and other fields. Hawkes processes are widely used to model such complex event sequences. Recently, the neural Hawkes process (NHP) and the transformer Hawkes process (THP) were proposed to expand the capacity of the Hawkes process. We argue that these models are highly complex because they introduce recurrent neural networks or attention mechanisms, and that while the attention mechanism achieves good performance, it is not essential. Therefore, in this paper, we propose a Two-Stage Multilayer Perceptron Hawkes Process (TMPHP). The model consists of two types of multilayer perceptrons (MLPs): one applied independently within each event sequence, learning features of each sequence to capture long-term dependencies between events, and one applied across different event sequences, capturing both long-term and short-term dependencies between events. Our model is simpler than other state-of-the-art models, yet it achieves better predictive performance. In particular, on the MIMIC dataset, our model outperforms RMTPP (by 4.2%), NHP (by 2.2%) and THP (by 2.2%) in prediction accuracy.
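The two-stage design described above resembles the MLP-Mixer architecture the paper cites: one MLP mixes information across the events of a sequence, and a second MLP mixes across feature channels. The NumPy sketch below illustrates that general idea only, not the authors' exact architecture; the layer sizes, residual layout, and the GELU/LayerNorm choices are assumptions borrowed from the Mixer-style design.

```python
import numpy as np

def gelu(x):
    # Tanh approximation of GELU (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-5):
    # Normalize each event embedding over its feature dimension
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, w2):
    # Two-layer perceptron with GELU nonlinearity
    return gelu(x @ w1) @ w2

def two_stage_block(x, w_ev1, w_ev2, w_ch1, w_ch2):
    # x: (seq_len, d_model) matrix of embedded events
    # Stage 1: mix across events (transpose so the MLP acts on the
    # sequence dimension), with a residual connection
    y = x + mlp(layer_norm(x).T, w_ev1, w_ev2).T
    # Stage 2: mix across feature channels, again with a residual
    return y + mlp(layer_norm(y), w_ch1, w_ch2)

rng = np.random.default_rng(0)
seq_len, d_model, hidden = 8, 16, 32
x = rng.normal(size=(seq_len, d_model))
out = two_stage_block(
    x,
    rng.normal(size=(seq_len, hidden)) * 0.1, rng.normal(size=(hidden, seq_len)) * 0.1,
    rng.normal(size=(d_model, hidden)) * 0.1, rng.normal(size=(hidden, d_model)) * 0.1,
)
print(out.shape)  # (8, 16): the block preserves the sequence/feature shape
```

Because both stages are plain matrix multiplications, such a block avoids the quadratic attention cost of THP and the sequential recurrence of NHP, which is the simplicity argument the abstract makes.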

This work was supported by the Science Foundation of China University of Petroleum, Beijing (No. 2462020YXZZ023).

References

  1. Bacry, E., Mastromatteo, I., Muzy, J.: Hawkes processes in finance. Mark. Microstruct. Liquid. 1, 1550005 (2015)

  2. Ogata, Y.: Space-time point-process models for earthquake occurrences. Ann. Inst. Stat. Math. 50, 379–402 (1998)

  3. Wang, L., Zhang, W., He, X., Zha, H.: Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2447–2456 (2018)

  4. Zhou, K., Zha, H., Song, L.: Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In: Artificial Intelligence and Statistics, pp. 641–649. PMLR (2013)

  5. Vere-Jones, D., Daley, D.J.: An Introduction to the Theory of Point Processes. Springer Series in Statistics. Springer, New York (1988). https://doi.org/10.1007/978-1-4757-2001-3

  6. Hawkes, A.: Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90 (1971)

  7. Graves, A.: Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence. Springer (2008)

  8. Du, N., Dai, H., Trivedi, R.S., Upadhyay, U., Gomez-Rodriguez, M., Song, L.: Recurrent marked temporal point processes: embedding event history to vector. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)

  9. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)

  10. Xiao, S., Yan, J., Yang, X., Zha, H., Chu, S.: Modeling the intensity function of point process via recurrent neural networks. In: AAAI (2017)

  11. Mei, H., Eisner, J.: The neural Hawkes process: a neurally self-modulating multivariate point process. In: NIPS (2017)

  12. Vaswani, A., et al.: Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)

  13. Zhang, Q., Lipani, A., Kirnap, O., Yilmaz, E.: Self-attentive Hawkes process. In: ICML, pp. 11183–11193 (2020)

  14. Zuo, S., Jiang, H., Li, Z., Zhao, T., Zha, H.: Transformer Hawkes process. In: ICML, pp. 11692–11702 (2020)

  15. Tolstikhin, I., et al.: MLP-mixer: an all-MLP architecture for vision. arXiv preprint arXiv:2105.01601 (2021)

  16. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  17. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog, vol. 1 (2019)

  18. Luo, W., Li, Y., Urtasun, R., Zemel, R.: Understanding the effective receptive field in deep convolutional neural networks. In: NeurIPS (2016)

  19. Dosovitskiy, A., et al.: An image is worth 16 × 16 words: transformers for image recognition at scale. In: ICLR (2021)

  20. Hendrycks, D., Gimpel, K.: Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415 (2016)

  21. Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)

  22. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

  23. Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer, New York (2004)

  24. Neumaier, A.: Introduction to Numerical Analysis (2001)

  25. Zhao, Q., Erdogdu, M., He, H.Y., Rajaraman, A., Leskovec, J.: SEISMIC: a self-exciting point process model for predicting tweet popularity. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015)

  26. Johnson, A.E.W., et al.: MIMIC-III, a freely accessible critical care database. Sci. Data 3 (2016)

  27. Leskovec, J., Krevl, A.: SNAP datasets: Stanford Large Network Dataset Collection (2014)

Author information

Corresponding author

Correspondence to Jian-wei Liu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Xing, X., Liu, J.-w., Cheng, Z.-h. (2023). Two-Stage Multilayer Perceptron Hawkes Process. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_3

  • DOI: https://doi.org/10.1007/978-981-99-1639-9_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-1638-2

  • Online ISBN: 978-981-99-1639-9

  • eBook Packages: Computer Science (R0)
