
DRAM: A Deep Reinforced Intra-attentive Model for Event Prediction

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11775)


Abstract

We address the problem of event prediction, which aims to predict the next probable event given a sequence of historical events. Event prediction matters to governments, agencies, and companies, enabling them to take proactive action to avoid damage. By acquiring knowledge from large-scale news series that record sequences of real-world events, we can learn from the past and see into the future. Most existing work focuses on predicting known events from a given candidate set rather than the more realistic task of unknown event prediction. In this paper, we propose a novel deep reinforced intra-attentive model, named DRAM, for unknown event prediction, which automatically generates the text description of the next probable unknown event. Specifically, DRAM designs a novel hierarchical intra-attention mechanism that attends not only to the previous events but also to the words describing those events. In addition, DRAM combines standard supervised word prediction with reinforcement learning during training, allowing it to directly optimize the non-differentiable BLEU score, which tracks human evaluation, and to generate higher-quality event descriptions. Extensive experiments on real-world datasets demonstrate that our model significantly outperforms state-of-the-art methods.
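The abstract describes combining supervised word prediction with a reinforcement-learning term that optimizes BLEU directly. A common way to realize this (used, e.g., in self-critical sequence training) is a mixed objective that weights a REINFORCE-style loss, baselined by the reward of a greedily decoded sequence, against the usual cross-entropy loss. The sketch below is a minimal illustration of that mixing under these assumptions; the function name, argument layout, and the mixing weight `gamma` are hypothetical and not taken from the paper.

```python
def mixed_loss(target_logprobs, sample_logprobs, r_sample, r_greedy, gamma=0.9):
    """Hypothetical sketch of a mixed ML + self-critical RL objective.

    target_logprobs: log-probs of ground-truth tokens (teacher forcing)
    sample_logprobs: log-probs of tokens sampled from the model
    r_sample:        BLEU reward of the sampled sequence
    r_greedy:        BLEU reward of the greedily decoded baseline sequence
    gamma:           weight trading off the RL term against the ML term
    """
    l_ml = -sum(target_logprobs)              # standard cross-entropy loss
    advantage = r_sample - r_greedy           # self-critical baseline
    l_rl = -advantage * sum(sample_logprobs)  # REINFORCE term on the sample
    return gamma * l_rl + (1.0 - gamma) * l_ml
```

When the sampled sequence scores a higher BLEU than the greedy baseline, the advantage is positive and the RL term increases the likelihood of the sampled tokens; otherwise it suppresses them, so no learned value baseline is needed.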


Notes

  1. https://pypi.org/project/jieba/.


Acknowledgements

This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFC0831500, the National Natural Science Foundation of China (No. 61806020), and the Fundamental Research Funds for the Central Universities.

Author information


Corresponding author

Correspondence to Linmei Hu.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Yu, S., Hu, L., Wu, B. (2019). DRAM: A Deep Reinforced Intra-attentive Model for Event Prediction. In: Douligeris, C., Karagiannis, D., Apostolou, D. (eds) Knowledge Science, Engineering and Management. KSEM 2019. Lecture Notes in Computer Science, vol 11775. Springer, Cham. https://doi.org/10.1007/978-3-030-29551-6_62

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-29551-6_62

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29550-9

  • Online ISBN: 978-3-030-29551-6

  • eBook Packages: Computer Science (R0)
