Abstract
Owing to the national online "clean-up" campaign, detecting false information with deep learning technology has become increasingly important. As content on social networks has become increasingly multimodal, many scholars have turned to multimodal fake news detection. However, existing multimodal approaches focus mainly on fusing text and image features, while modeling the consistency between the two modalities is still in its infancy. This paper concentrates on how to extract effective features from texts and images and how to match the modalities more precisely, and proposes a novel fake news detection method. Specifically, BERT, VGG, and Optical Character Recognition (OCR) models are adopted to extract the textual features, the visual features, and the text embedded in the attached image, respectively. The overall framework consists of four components: one fusion module and three matching modules. The fusion module joins the text and image features, while the three matching modules compute the pairwise similarities among the textual, visual, and auxiliary modalities. Their outputs are combined with different weights and fed into a classifier, which decides whether the news is fake or real. Comparative experiments demonstrate the effectiveness of our model, which reaches 88.1% accuracy on the Chinese Weibo dataset and 91.7% accuracy on the English Twitter dataset.
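The architecture summarized above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions rather than the authors' implementation: the hidden size, the use of cosine similarity in the matching modules, and the softmax-normalized learnable weights are assumptions, and the BERT, VGG, and OCR feature extractors are represented only by the dimensions of the features they would produce.

# Minimal sketch of the fusion-plus-matching detector described in the abstract.
# Names, dimensions, and the weighting scheme are illustrative assumptions; the
# textual, visual, and auxiliary (OCR-text) features would come from pretrained
# BERT, VGG, and an OCR pipeline respectively.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingFakeNewsDetector(nn.Module):
    def __init__(self, text_dim=768, img_dim=4096, aux_dim=768, hidden=256):
        super().__init__()
        # Project each modality into a shared space so they can be compared.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.img_proj = nn.Linear(img_dim, hidden)
        self.aux_proj = nn.Linear(aux_dim, hidden)
        # Fusion module: joint representation of text and image features.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        # Learnable weights for the fused feature and the three similarities.
        self.weights = nn.Parameter(torch.ones(4))
        # Binary classifier: fake vs. real.
        self.classifier = nn.Linear(hidden + 3, 2)

    def forward(self, text_feat, img_feat, aux_feat):
        t = self.text_proj(text_feat)   # BERT sentence embedding
        v = self.img_proj(img_feat)     # VGG image embedding
        a = self.aux_proj(aux_feat)     # embedding of OCR text from the image
        fused = self.fusion(torch.cat([t, v], dim=-1))
        # Three matching modules: pairwise cosine similarities.
        sim_tv = F.cosine_similarity(t, v, dim=-1)
        sim_ta = F.cosine_similarity(t, a, dim=-1)
        sim_va = F.cosine_similarity(v, a, dim=-1)
        w = torch.softmax(self.weights, dim=0)
        combined = torch.cat(
            [w[0] * fused,
             (w[1] * sim_tv).unsqueeze(-1),
             (w[2] * sim_ta).unsqueeze(-1),
             (w[3] * sim_va).unsqueeze(-1)],
            dim=-1,
        )
        return self.classifier(combined)  # logits over {real, fake}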
Acknowledgements
This research work was funded by the Beijing Social Science Foundation (21XCCC013).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Guo, Y., Li, B., Ge, H., Di, C. (2023). An Auxiliary Modality Based Text-Image Matching Methodology for Fake News Detection. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science, vol 14255. Springer, Cham. https://doi.org/10.1007/978-3-031-44210-0_6
Print ISBN: 978-3-031-44209-4
Online ISBN: 978-3-031-44210-0