Abstract:
Nowadays, with the development of multimedia technology, rumor spreaders tend to produce false information with multi-modal content to attract the attention of news readers. However, it is challenging to capture implicit clues among multi-modal data and produce effective representations for false information detection. Moreover, since spreaders tend to evade detectors, it is necessary to develop a robust detection model that can resist multi-modal adversarial attacks, which has been less studied in existing works. To address these issues, we propose a novel multi-modal false information detection framework with adversarial training (MFAT). By adopting a pretrained multi-modal model and a cross-modal attention mechanism, MFAT captures fine-grained element-level relationships and coarse-grained modal-level relationships simultaneously, and thus better exploits diverse multi-modal clues. MFAT's robustness and generalization are further enhanced by defending against adversarial attacks on multi-modal features. Experiments on two real-world datasets demonstrate that MFAT significantly outperforms state-of-the-art baselines. We also analyze the impact of three types of multi-modal attacks and verify that the model's robustness is improved. Code will be released upon acceptance.
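The two ideas sketched in the abstract, cross-modal attention between text and image features and adversarial perturbation of the fused multi-modal features, can be illustrated in simplified form. The sketch below is a hypothetical, minimal NumPy illustration, not the authors' MFAT implementation: it uses scaled dot-product attention (text tokens attending over image regions) and an FGSM-style sign perturbation as one common way to generate feature-level adversarial examples; the function names, dimensions, and `eps` value are all illustrative assumptions.

```python
import numpy as np

def cross_modal_attention(text_feats, image_feats):
    # Fine-grained element-level interaction: each text token attends
    # over all image regions via scaled dot-product attention.
    # (Hypothetical simplification of the cross-modal mechanism.)
    d = text_feats.shape[1]
    scores = text_feats @ image_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over regions
    return weights @ image_feats  # image-aware text representations

def fgsm_perturb(feats, grad, eps=0.05):
    # FGSM-style adversarial perturbation on multi-modal features:
    # take a small step in the direction of the loss gradient's sign.
    # Training on such perturbed features is one standard form of
    # adversarial training (an assumption here, not MFAT's exact scheme).
    return feats + eps * np.sign(grad)

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))    # 4 text tokens, feature dim 8
image = rng.normal(size=(6, 8))   # 6 image regions, feature dim 8
fused = cross_modal_attention(text, image)
adv = fgsm_perturb(fused, rng.normal(size=fused.shape))
print(fused.shape)                # one fused vector per text token
```

In a real detector the gradient passed to `fgsm_perturb` would come from backpropagating the classification loss to the fused features, and the model would be trained on both clean and perturbed features to gain the robustness the abstract describes.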
Date of Conference: 18-23 July 2022
Date Added to IEEE Xplore: 30 September 2022