Abstract
Pretrained language models (PLMs) combined with additional features have achieved excellent performance in rumor detection. However, on the one hand, recent studies identify a critical challenge: the significant gap between the objective forms used in pretraining and fine-tuning, which prevents the knowledge in PLMs from being fully exploited. On the other hand, text content is condensed and full of knowledge entities, yet existing methods usually focus on textual content and social context while ignoring external knowledge about those entities. In this paper, to address these limitations, we propose a Prompt-based External Knowledge Integration Network (PEKIN) for rumor detection, which incorporates both prior knowledge of the rumor detection task and external knowledge of text entities. For one thing, unlike the conventional "pretrain, fine-tune" paradigm, we propose a prompt-based method that brings in prior knowledge to help PLMs understand the rumor detection task and better stimulates the rich knowledge distributed in PLMs. For another, we identify entities mentioned in the text and retrieve their annotations from a knowledge base; we then use these annotation contexts as external knowledge that provides complementary information. Experiments on three datasets show that PEKIN outperforms all compared models, significantly beating the previous state of the art on the Weibo dataset.
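The two ideas in the abstract can be illustrated with a minimal sketch: a cloze-style prompt template that appends knowledge-base annotations of the post's entities as extra context, and a verbalizer that maps the label word a masked language model would predict at the mask position back to a task label. The template wording, separator tokens, and verbalizer words below are illustrative assumptions, not the authors' actual design, and the sketch omits the PLM call itself.

```python
# Illustrative sketch of a prompt-based rumor-detection formulation.
# Template wording, [SEP] usage, and verbalizer words are assumptions,
# not PEKIN's actual implementation.

MASK = "[MASK]"

def build_prompt(post_text: str, entity_annotations: list[str]) -> str:
    """Wrap a post in a cloze template so a masked language model can
    fill in a label word at the [MASK] position. Entity annotations
    retrieved from a knowledge base are appended as external context."""
    knowledge = " ".join(entity_annotations)
    return f"{post_text} [SEP] {knowledge} [SEP] This claim is {MASK} ."

# Verbalizer: maps candidate label words at [MASK] to task labels.
VERBALIZER = {"true": "non-rumor", "false": "rumor"}

def verbalize(predicted_word: str) -> str:
    """Translate the PLM's predicted word into a rumor-detection label."""
    return VERBALIZER.get(predicted_word.lower(), "unknown")

prompt = build_prompt(
    "NASA found a new planet made entirely of diamond.",
    ["NASA: U.S. space agency", "planet: astronomical body orbiting a star"],
)
```

In practice, the prompt would be fed to a masked-LM head and the logits of the verbalizer words compared; the point here is only that the task is recast as cloze completion, so the fine-tuning objective matches the pretraining objective.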
Acknowledgements
This work is partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDC02060400.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hu, Z., Liu, H., Li, K., Wang, Y., Liu, Z., Zhang, X. (2022). PEKIN: Prompt-Based External Knowledge Integration Network for Rumor Detection on Social Media. In: Khanna, S., Cao, J., Bai, Q., Xu, G. (eds) PRICAI 2022: Trends in Artificial Intelligence. PRICAI 2022. Lecture Notes in Computer Science, vol 13630. Springer, Cham. https://doi.org/10.1007/978-3-031-20865-2_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20864-5
Online ISBN: 978-3-031-20865-2