Abstract
We address the problem of event prediction, which aims to predict the next probable event given a sequence of historical events. Event prediction is important for governments, agencies, and companies, enabling them to take proactive actions to avoid damage. By acquiring knowledge from large-scale news series that record sequences of real-world events, we can learn from the past and anticipate the future. Most existing works focus on predicting known events from a given candidate set rather than tackling the more realistic problem of unknown event prediction. In this paper, we propose a novel deep reinforced intra-attentive model, named DRAM, for unknown event prediction, which automatically generates the text description of the next probable unknown event. Specifically, DRAM introduces a novel hierarchical intra-attention mechanism that attends not only to previous events but also to the words describing those events. In addition, DRAM combines standard supervised word prediction with reinforcement learning during training, allowing it to directly optimize the non-differentiable BLEU score, which correlates with human evaluation, and to generate higher-quality event descriptions. Extensive experiments on real-world datasets demonstrate that our model significantly outperforms state-of-the-art methods.
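To make the mixed training objective described above more concrete, the minimal sketch below shows one common way to combine a supervised cross-entropy loss with a self-critical, BLEU-based policy-gradient loss. It is an illustrative assumption rather than the authors' exact formulation: the function mixed_loss, the tensor shapes, and the gamma mixing weight are hypothetical placeholders, and the BLEU rewards are assumed to be computed outside this function.

import torch.nn.functional as F

def mixed_loss(logits, target_ids, sampled_log_probs,
               sampled_bleu, greedy_bleu, gamma=0.95):
    # logits:            (batch, seq_len, vocab) decoder outputs under teacher forcing
    # target_ids:        (batch, seq_len) ground-truth event description tokens
    # sampled_log_probs: (batch,) summed log-probabilities of a sampled sequence
    # sampled_bleu:      (batch,) BLEU of the sampled sequence vs. the reference
    # greedy_bleu:       (batch,) BLEU of the greedy-decoded sequence (baseline)

    # Supervised word-prediction term: token-level cross-entropy
    # against the ground-truth event description.
    ml_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              target_ids.reshape(-1))

    # Reinforcement-learning term (self-critical): reward the sampled
    # sequence by how much its BLEU exceeds the greedy baseline's BLEU.
    advantage = (sampled_bleu - greedy_bleu).detach()
    rl_loss = -(advantage * sampled_log_probs).mean()

    # Mixed objective: gamma trades off the RL and ML terms.
    return gamma * rl_loss + (1.0 - gamma) * ml_loss

Here the supervised term keeps the generated descriptions fluent, while the greedy-decoded BLEU acts as a variance-reducing baseline for the sequence-level reward.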
Acknowledgements
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFC0831500, the National Natural Science Foundation of China (No. 61806020), and the Fundamental Research Funds for the Central Universities.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Yu, S., Hu, L., Wu, B. (2019). DRAM: A Deep Reinforced Intra-attentive Model for Event Prediction. In: Douligeris, C., Karagiannis, D., Apostolou, D. (eds) Knowledge Science, Engineering and Management. KSEM 2019. Lecture Notes in Computer Science, vol. 11775. Springer, Cham. https://doi.org/10.1007/978-3-030-29551-6_62
DOI: https://doi.org/10.1007/978-3-030-29551-6_62
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-29550-9
Online ISBN: 978-3-030-29551-6
eBook Packages: Computer Science, Computer Science (R0)