Abstract
The presence of misinformation and harmful content on social networks is an emerging problem that endangers public health. One of the most successful approaches for detecting, assessing, and promptly responding to misinformation is Natural Language Processing (NLP) based on semantic similarity. However, language remains one of the most significant barriers, highlighting the need for multilingual tools to fight misinformation effectively. This paper presents an approach for countering misinformation through a semantic-aware multilingual architecture. Because the task requires assessing the level of similarity between a pair of texts in a multilingual scenario, we built an extension of the well-known Semantic Textual Similarity Benchmark (STSb) to 15 languages. This new dataset allows fine-tuning and evaluating multilingual Transformer-based models with a siamese network topology on monolingual and cross-lingual Semantic Textual Similarity (STS) tasks, achieving a maximum average Spearman correlation coefficient of 83.60%. We validate our proposal on the Covid-19 MLIA @ Eval Multilingual Semantic Search Task. The reported results demonstrate that semantic-aware multilingual architectures successfully measure the degree of similarity between pairs of texts, while broadening our understanding of the multilingual capabilities of this type of model. The results and the new multilingual STS Benchmark data presented and made publicly available in this study constitute an initial step towards extending semantic-similarity methods proposed in the literature to combat misinformation at a multilingual level.
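The evaluation protocol summarised in the abstract — scoring each sentence pair by the cosine similarity of the embeddings produced by a siamese encoder, then comparing the predicted scores against gold STS annotations with Spearman's rank correlation — can be sketched in plain Python. The embeddings and gold scores below are illustrative placeholders, not outputs of the actual fine-tuned multilingual model:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def _ranks(values):
    """1-based ranks, averaging ties (as Spearman's rho requires)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank variables."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy sentence-pair embeddings; in the paper these would come from the
# siamese multilingual Transformer encoder.
pairs = [
    ([0.9, 0.1, 0.0], [0.8, 0.2, 0.1]),   # near-paraphrases
    ([0.5, 0.5, 0.0], [0.4, 0.4, 0.2]),   # related sentences
    ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]),   # unrelated sentences
]
gold_scores = [4.8, 3.1, 0.2]  # hypothetical STSb-style annotations (0-5 scale)

predicted = [cosine_similarity(u, v) for u, v in pairs]
rho = spearman(predicted, gold_scores)  # rho == 1.0: rankings agree perfectly
```

Spearman's coefficient is used rather than Pearson's because STS evaluation cares about the relative ordering of pairs, not a linear fit between cosine scores and the 0-5 annotation scale.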
Notes
1.
2. ar, cs, de, en, es, fr, hi, it, ja, nl, pl, pt, ru, tr, zh-CN, zh-TW.
3. Google Translator Python package: https://pypi.org/project/google-trans-new/.
4. Multilingual STSb available at https://github.com/Huertas97/Multilingual-STSB.
5. Fine-tuned model available in the Hugging Face hub.
6. Covid-19 MLIA @ Eval initiative: http://eval.covid19-mlia.eu/task2/.
Acknowledgements
This research is funded by the project CIVIC: Intelligent characterisation of the veracity of the information related to COVID-19, granted by the BBVA Foundation Grants for Scientific Research Teams SARS-CoV-2 and COVID-19; by the Ministry of Science and Education under the PID2020-117263GB-100 (FightDIS) project; by the Comunidad Autónoma de Madrid under the S2018/TCS-4566 (CYNAMON) and S2017/BMD-3688 grants; and by the European Commission under 2020-EU-IA-0252, IBERIFIER - Iberian Digital Media Research and Fact-Checking Hub.
© 2021 Springer Nature Switzerland AG
Cite this paper
Huertas-García, Á., Huertas-Tato, J., Martín, A., Camacho, D. (2021). Countering Misinformation Through Semantic-Aware Multilingual Models. In: Yin, H., et al. Intelligent Data Engineering and Automated Learning – IDEAL 2021. IDEAL 2021. Lecture Notes in Computer Science(), vol 13113. Springer, Cham. https://doi.org/10.1007/978-3-030-91608-4_31
Print ISBN: 978-3-030-91607-7
Online ISBN: 978-3-030-91608-4