
Generating Financial Reports from Macro News via Multiple Edits Neural Networks

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12459)

Abstract

Automatically generating financial reports from a piece of breaking macro news is a challenging task. Essentially, this is a text-to-text generation problem, but one that must produce long text, i.e., more than 40 words, from a short piece of macro news. Moreover, the core skill humans apply when writing a financial report is logical inference from a succinct piece of macro news. To address this issue, we propose a novel multiple edits neural network, which first learns an outline for the given news and then generates the financial report from the learnt outline. In particular, the input news is first embedded via a skip-gram model and then fed into a Bi-LSTM component to train a contextual representation vector. This vector is used to learn the latent word probability distribution for generating financial reports. To train this end-to-end neural network model, we collected one hundred thousand news-report pairs. Extensive experiments are performed on this dataset. The proposed model achieves state-of-the-art performance against baseline models with respect to the evaluation criteria BLEU, ROUGE and human scores. Although the readability of the reports generated by our approach is better than that of the other models, it remains an open problem that calls for further effort.
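The abstract describes a pipeline in which skip-gram embeddings are fed into a Bi-LSTM to produce a contextual representation vector, which in turn parameterises a latent word probability distribution. Below is a minimal PyTorch sketch of that encoder, not the authors' implementation: the class name, dimensions, mean-pooling, and the single softmax head are all illustrative assumptions, and the multiple-edits outline stage is omitted.

```python
# Minimal sketch: pretrained skip-gram embeddings -> Bi-LSTM encoder ->
# pooled contextual vector -> word probability distribution over the
# vocabulary. All names and dimensions are assumptions for illustration.
import torch
import torch.nn as nn


class NewsToReportSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        # The paper uses skip-gram embeddings; a trainable embedding
        # table stands in for them here.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Project the contextual vector onto the vocabulary to obtain
        # a latent word probability distribution.
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, news_token_ids):
        x = self.embed(news_token_ids)        # (batch, seq, embed_dim)
        states, _ = self.encoder(x)           # (batch, seq, 2 * hidden_dim)
        context = states.mean(dim=1)          # pooled contextual vector
        return torch.softmax(self.out(context), dim=-1)


# Usage: a word distribution for a toy batch of two 40-token news items.
model = NewsToReportSketch(vocab_size=10000)
probs = model(torch.randint(0, 10000, (2, 40)))
print(probs.shape)  # torch.Size([2, 10000])
```

In the full model this distribution would drive sequential decoding of the report from the learnt outline; the sketch stops at the encoder output to keep the data flow visible.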


Notes

  1. We make our dataset available at https://github.com/papersharing/news-and-reports-dataset.



Author information


Correspondence to Xiaofeng Zhang or Yunpeng Ren.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, W., Zhang, X., Ren, Y. (2021). Generating Financial Reports from Macro News via Multiple Edits Neural Networks. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2020. Lecture Notes in Computer Science, vol 12459. Springer, Cham. https://doi.org/10.1007/978-3-030-67664-3_40


  • DOI: https://doi.org/10.1007/978-3-030-67664-3_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67663-6

  • Online ISBN: 978-3-030-67664-3

  • eBook Packages: Computer Science; Computer Science (R0)
