
Text Summarization as the Potential Technology for Intelligent Internet of Things

  • Conference paper
  • In: Smart Grid and Internet of Things (SGIoT 2020)

Abstract

Applying automatic text summarization to the Internet of Things can reduce network cost and improve computing speed. However, models for abstractive text summarization typically rely on a sequence-to-sequence architecture with an attention mechanism. Unfortunately, this approach has two main shortcomings: it fails to handle unknown (out-of-vocabulary) words, and its generated summaries suffer from repetition, which hurts readability. Our goal is to enhance the semantic coherence between summaries and their source texts. To this end, we propose a new model that augments the traditional approach in two ways. First, we incorporate semantic relevance into a pointer-generator network to achieve high similarity between source texts and summaries. Second, we modify the coverage mechanism and apply it to the pointer-generator network to discourage repetition. Following prior work, we evaluate our model on the Chinese social media dataset LCSTS. Our experiments suggest that the new model outperforms current abstractive baseline systems on this dataset.
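The three ingredients named in the abstract can be illustrated with a toy sketch: the copy mixture of a pointer-generator network, a coverage penalty against repetition, and a semantic-relevance score between source and summary. This is not the authors' implementation; the function names, and the choice of cosine similarity as the relevance score, are illustrative assumptions.

```python
import numpy as np

def final_distribution(p_gen, vocab_dist, attn, src_ids):
    """Pointer-generator mixture: with probability p_gen emit from the
    decoder's vocabulary distribution, otherwise copy a source token in
    proportion to its attention weight. Copying lets the model output
    words absent from the fixed vocabulary."""
    dist = p_gen * vocab_dist
    for pos, token_id in enumerate(src_ids):
        dist[token_id] += (1.0 - p_gen) * attn[pos]
    return dist

def coverage_loss(attn_steps):
    """Coverage penalty: c_t accumulates attention from earlier decoder
    steps, and each step is charged sum_i min(a_t[i], c_t[i]), so
    re-attending to an already-covered source position (i.e. repeating
    it) is penalised."""
    coverage = np.zeros_like(attn_steps[0])
    loss = 0.0
    for a in attn_steps:
        loss += float(np.minimum(a, coverage).sum())
        coverage = coverage + a
    return loss

def semantic_relevance(src_vec, sum_vec):
    """Cosine similarity between sentence-level vectors of the source
    and the summary; maximising it during training encourages output
    that stays semantically close to the source."""
    return float(src_vec @ sum_vec /
                 (np.linalg.norm(src_vec) * np.linalg.norm(sum_vec)))
```

For example, with p_gen = 0.8 the mixture still places 20% of its probability mass on copied source tokens, so an out-of-vocabulary word that appears in the source can be emitted directly; and a decoder that attends twice to the same source position pays a nonzero coverage loss at the second step.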



Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 61701019.

Author information

Correspondence to Lijun Wei.


Copyright information

© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Wei, L., Liu, Y., Li, J. (2021). Text Summarization as the Potential Technology for Intelligent Internet of Things. In: Lin, YB., Deng, DJ. (eds) Smart Grid and Internet of Things. SGIoT 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 354. Springer, Cham. https://doi.org/10.1007/978-3-030-69514-9_14


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69513-2

  • Online ISBN: 978-3-030-69514-9
