
Text Understanding with a Hybrid Neural Network Based Learning

  • Conference paper
  • Data Science (ICPCSEE 2017)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 728)

Abstract

Teaching machines to understand text requires designing algorithms that can comprehend documents. Because traditional methods often fail to learn the inherent features of text effectively, this paper presents a new hybrid neural network model that extracts sentence-level summaries from a single document, allowing an attention-based deep neural network to learn to understand documents with minimal prior knowledge. The proposed model, composed of multiple processing layers, learns representations of features. Word embeddings provide continuous word representations, from which sentences are constructed as input to a convolutional neural network. A recurrent neural network then labels the sentences of the original document, and the proposed BAM-GRU model is more efficient. Experimental results demonstrate the feasibility of the approach. Open problems and directions for further work are discussed at the end.
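The pipeline the abstract outlines — word embedding, a convolutional encoder producing a fixed-size sentence vector, and a bidirectional recurrent network scoring each sentence for extraction — can be sketched as follows. This is a minimal NumPy illustration with assumed dimensions and random weights; the filter width, gating details, and the paper's actual BAM-GRU formulation are not specified here and are placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 8, 6  # embedding and hidden sizes (illustrative assumptions)

def embed(tokens, vocab, table):
    """Look up continuous word representations (the word-embedding layer)."""
    return np.stack([table[vocab[t]] for t in tokens])

def conv_sentence(emb, filt, width=3):
    """1-D convolution + max-over-time pooling -> fixed-size sentence vector."""
    n = emb.shape[0]
    if n < width:  # pad sentences shorter than the filter width
        emb = np.vstack([emb, np.zeros((width - n, emb.shape[1]))])
        n = width
    windows = np.stack([emb[i:i + width].ravel() for i in range(n - width + 1)])
    return np.tanh(windows @ filt).max(axis=0)  # max pooling over positions

def gru_step(h, x, Wz, Wr, Wh):
    """One standard GRU step: update gate z, reset gate r, candidate state."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(np.concatenate([h, x]) @ Wz)
    r = sig(np.concatenate([h, x]) @ Wr)
    h_tilde = np.tanh(np.concatenate([r * h, x]) @ Wh)
    return (1 - z) * h + z * h_tilde

def label_sentences(sent_vecs, params):
    """Bidirectional GRU over sentence vectors; sigmoid score per sentence
    is the probability that the sentence belongs in the summary."""
    Wz, Wr, Wh, w_out = params
    fwd, bwd, h = [], [], np.zeros(HID)
    for v in sent_vecs:                       # forward pass
        h = gru_step(h, v, Wz, Wr, Wh); fwd.append(h)
    h = np.zeros(HID)
    for v in sent_vecs[::-1]:                 # backward pass
        h = gru_step(h, v, Wz, Wr, Wh); bwd.append(h)
    H = np.hstack([np.stack(fwd), np.stack(bwd[::-1])])
    return 1.0 / (1.0 + np.exp(-(H @ w_out)))  # extraction probabilities

# Toy document: three "sentences" of tokenized words.
doc = [["the", "cat", "sat"], ["it", "was", "warm"], ["dogs", "bark", "loud"]]
vocab = {w: i for i, w in enumerate(sorted({w for s in doc for w in s}))}
table = rng.normal(size=(len(vocab), EMB))         # random embedding table
filt = rng.normal(size=(3 * EMB, HID))             # conv filter (width 3)
params = (rng.normal(size=(2 * HID, HID)),         # Wz
          rng.normal(size=(2 * HID, HID)),         # Wr
          rng.normal(size=(2 * HID, HID)),         # Wh
          rng.normal(size=2 * HID))                # output weights

sent_vecs = [conv_sentence(embed(s, vocab, table), filt) for s in doc]
scores = label_sentences(sent_vecs, params)
print(scores.shape)  # one extraction score per sentence
```

In a trained system these weights would be learned end-to-end, and the top-scoring sentences would be extracted as the summary; here the scores are meaningful only in shape, not value.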




Acknowledgment

This work was sponsored by the National Natural Science Foundation of China (Grant No. 61272362) and the National Basic Research Program of China (973 Program, Grant No. 2013CB329606). It was also sponsored by the National Science Foundation of Hebei Province (No. F2017208012) and the Key Research Project for Universities of Hebei Province (No. ZD2014029).

Author information

Correspondence to Huaping Zhang.


Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Gao, S., Zhang, H., Gao, K. (2017). Text Understanding with a Hybrid Neural Network Based Learning. In: Zou, B., Han, Q., Sun, G., Jing, W., Peng, X., Lu, Z. (eds) Data Science. ICPCSEE 2017. Communications in Computer and Information Science, vol 728. Springer, Singapore. https://doi.org/10.1007/978-981-10-6388-6_10


  • DOI: https://doi.org/10.1007/978-981-10-6388-6_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-6387-9

  • Online ISBN: 978-981-10-6388-6

  • eBook Packages: Computer Science, Computer Science (R0)
