Abstract
Applying automatic text summarization to the Internet of Things can reduce network cost and improve computing speed. However, abstractive summarization models typically rely on a sequence-to-sequence architecture with an attention mechanism, which has two main shortcomings: it fails to handle unknown words, and its generated summaries are often hard to read because of repetition. Our goal is to enhance the semantic coherence between summaries and their source texts. To this end, we propose a new model that augments the traditional approach in two ways. First, we add semantic relevance to the pointer-generator network to achieve high similarity between source texts and generated summaries. Second, we modify the coverage mechanism and apply it to the pointer-generator network to discourage repetition. Following prior work, we evaluate the new model on the Chinese social media dataset LCSTS. Our experiments suggest that it outperforms current abstractive baseline systems on this dataset.
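The two mechanisms named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; all dimensions, the random attention scores, and the fixed `p_gen` value are placeholder assumptions. It shows (a) the coverage loss, which penalizes attending again to source positions already covered, and (b) the pointer-generator mixture, which blends the decoder's vocabulary distribution with a copy distribution scattered from attention weights.

```python
import numpy as np

# Toy dimensions: 4 source tokens, 6-word vocabulary, 3 decoding steps.
rng = np.random.default_rng(0)
src_len, vocab, steps = 4, 6, 3
src_ids = np.array([1, 3, 3, 5])          # source token ids (token 3 repeats)

coverage = np.zeros(src_len)              # c_0 = 0: nothing attended yet
cov_loss = 0.0

for t in range(steps):
    # Attention over source positions (softmax of random scores here,
    # standing in for a learned attention module).
    scores = rng.normal(size=src_len)
    attn = np.exp(scores) / np.exp(scores).sum()

    # Coverage loss discourages repetition: sum_i min(a_t[i], c_t[i]).
    cov_loss += np.minimum(attn, coverage).sum()
    coverage += attn                      # c_{t+1} = c_t + a_t

    # Pointer-generator mix: p_gen blends generating from the vocabulary
    # with copying from the source (handles unknown/rare words).
    p_gen = 0.5                           # placeholder; learned in practice
    p_vocab = np.full(vocab, 1.0 / vocab) # placeholder decoder distribution
    p_copy = np.zeros(vocab)
    np.add.at(p_copy, src_ids, attn)      # scatter attention onto token ids
    p_final = p_gen * p_vocab + (1 - p_gen) * p_copy
```

Because both `p_vocab` and `p_copy` sum to one, `p_final` remains a valid distribution, and `cov_loss` grows only when the model re-attends to already-covered positions.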
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant 61701019.
Copyright information
© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Wei, L., Liu, Y., Li, J. (2021). Text Summarization as the Potential Technology for Intelligent Internet of Things. In: Lin, YB., Deng, DJ. (eds) Smart Grid and Internet of Things. SGIoT 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 354. Springer, Cham. https://doi.org/10.1007/978-3-030-69514-9_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-69513-2
Online ISBN: 978-3-030-69514-9