DOI: 10.1145/3627377.3627388

Graph Attention Network for Short Text Type News

Published: 04 December 2023

ABSTRACT

In the era of self-media, fake news spreads more widely and rapidly because anyone can act as a news editor, producing a large volume of short news texts. The unchecked dissemination and sharing of news information leads to a continuous stream of fake news events that not only mislead readers but also harm society. Moreover, short news texts are semantically sparse and lack strong contextual connections, which makes it difficult to extract effective text features and to detect fake news accurately. This paper therefore proposes a heterogeneous graph attention network that incorporates multiple text-related features. The network breaks with the traditional practice of connecting only text features in graph neural networks: it extracts external knowledge and diverse text feature information from the news to construct the graph, and it establishes connections among different text features to enrich semantic understanding. Linking to external knowledge bases also helps disambiguate entity words in the news. The heterogeneous graph is then fed into a dual attention mechanism at both the node and pattern levels, which captures the importance of different neighboring nodes, down-weights noisy nodes, and accurately identifies valid information. Experimental results demonstrate that the proposed method outperforms several baseline models, such as TextGCN, in terms of accuracy.
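
The abstract describes a dual attention design: attention over individual neighboring nodes, followed by attention over groups of neighbors. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, interpreting the "pattern level" as attention over neighbor node types (e.g. entity, topic), in the spirit of HGAT-style models [18]. The class name, tensor shapes, and overall wiring are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of dual-level (node-level + type-level) attention
# over a heterogeneous graph; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualLevelAttentionLayer(nn.Module):
    """One propagation layer. For each target (news) node, neighbours are grouped
    by type (e.g. word, topic, entity); node-level attention weights neighbours
    within a type, and type-level attention weights the per-type messages."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)      # shared projection
        self.node_att = nn.Linear(2 * out_dim, 1, bias=False)   # node-level scorer
        self.type_att = nn.Linear(2 * out_dim, 1, bias=False)   # type-level scorer

    def forward(self, h_target, neighbours_by_type):
        """h_target: (N, in_dim) target-node features.
        neighbours_by_type: dict of type name -> (feat, adj), where feat is
        (M_t, in_dim) and adj is a (N, M_t) 0/1 mask from targets to neighbours."""
        z_t = self.proj(h_target)                                # (N, out_dim)
        per_type_messages = []

        for feat, adj in neighbours_by_type.values():
            z_n = self.proj(feat)                                # (M_t, out_dim)
            # Node-level attention: score every (target, neighbour) pair of this type.
            pair = torch.cat(
                [z_t.unsqueeze(1).expand(-1, z_n.size(0), -1),
                 z_n.unsqueeze(0).expand(z_t.size(0), -1, -1)], dim=-1)
            scores = F.leaky_relu(self.node_att(pair)).squeeze(-1)   # (N, M_t)
            scores = scores.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(scores, dim=-1).nan_to_num(0.0)    # no neighbours -> 0
            per_type_messages.append(alpha @ z_n)                    # (N, out_dim)

        # Type-level attention: weight the per-type messages for each target node.
        msgs = torch.stack(per_type_messages, dim=1)                 # (N, T, out_dim)
        pair = torch.cat([z_t.unsqueeze(1).expand_as(msgs), msgs], dim=-1)
        beta = torch.softmax(F.leaky_relu(self.type_att(pair)).squeeze(-1), dim=-1)
        return F.elu((beta.unsqueeze(-1) * msgs).sum(dim=1))         # (N, out_dim)


# Tiny usage example with random features (dimensions are arbitrary).
if __name__ == "__main__":
    layer = DualLevelAttentionLayer(in_dim=16, out_dim=8)
    news = torch.randn(4, 16)
    neighbours = {
        "entity": (torch.randn(6, 16), torch.randint(0, 2, (4, 6)).float()),
        "topic": (torch.randn(3, 16), torch.randint(0, 2, (4, 3)).float()),
    }
    print(layer(news, neighbours).shape)   # torch.Size([4, 8])
```

In a full model of this kind, several such layers would typically be stacked over the heterogeneous graph and the resulting news-node representations passed to a classifier that predicts whether an article is real or fake.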

References

  1. Vosoughi S, Roy D, Aral S. The spread of true and false news online[J]. Science, 2018, 359(6380): 1146-1151.
  2. Zhou X, Zafarani R, Shu K, et al. Fake news: Fundamental theories, detection strategies and challenges[C]//Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 2019: 836-837.
  3. Dai Z, Yang Z, Yang Y, et al. Transformer-XL: Attentive language models beyond a fixed-length context[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 2978-2988.
  4. Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of NAACL-HLT. 2019.
  5. Zhang Z, Han X, Liu Z, et al. ERNIE: Enhanced language representation with informative entities[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 1441-1451.
  6. Ren F, She T. Utilizing external knowledge to enhance semantics in emotion detection in conversation[J]. IEEE Access, 2021, 9: 154947-154956.
  7. Zhuang L, Wayne L, Ya S, et al. A robustly optimized BERT pre-training approach with post-training[C]//Proceedings of the 20th Chinese National Conference on Computational Linguistics. 2021: 1218-1227.
  8. Tenney I, Das D, Pavlick E. BERT rediscovers the classical NLP pipeline[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 4593-4601.
  9. Wang S, Guo Y, Wang Y, et al. SMILES-BERT: Large scale unsupervised pre-training for molecular property prediction[C]//Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. 2019: 429-436.
  10. Nguyen T P, Razniewski S, Weikum G. Advanced semantics for commonsense knowledge extraction[C]//Proceedings of the Web Conference 2021. 2021: 2636-2647.
  11. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30.
  12. Peters M E, Neumann M, Iyyer M, et al. Deep contextualized word representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 2018: 2227-2237.
  13. Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[R]. OpenAI, 2018.
  14. Veličković P, Cucurull G, Casanova A, et al. Graph attention networks[C]//International Conference on Learning Representations. 2018.
  15. Yao L, Mao C, Luo Y. Graph convolutional networks for text classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33(01): 7370-7377.
  16. Zhang Y, Yu X, Cui Z, et al. Every document owns its structure: Inductive text classification via graph neural networks[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 334-339.
  17. Ding K, Wang J, Li J, et al. Be more with less: Hypergraph attention networks for inductive text classification[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020: 4927-4936.
  18. Linmei H, Yang T, Shi C, et al. Heterogeneous graph attention networks for semi-supervised short text classification[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019: 4821-4830.
  19. Ren Y, Zhang J. Fake news detection on news-oriented heterogeneous information networks through hierarchical graph attention[C]//2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021: 1-8.
  20. Mehta N, Pacheco M L, Goldwasser D. Tackling fake news detection by continually improving social context representations using graph neural networks[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 1363-1380.

Published in

          ICBDT '23: Proceedings of the 2023 6th International Conference on Big Data Technologies
          September 2023
          441 pages
          ISBN:9798400707667
          DOI:10.1145/3627377

          Copyright © 2023 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 4 December 2023

          Qualifiers

          • research-article
          • Research
          • Refereed limited
