ABSTRACT
The spread of hate speech on social media platforms has become a rising concern in recent years. Understanding how hate spreads is crucial for mitigating its harmful effects and fostering a healthier online environment. In this paper, we propose a new model to capture the evolution of toxicity in a network: if a tweet with a certain toxicity (hatefulness) is posted, how toxic will the social network become after a given number of rounds? We compute a toxicity score for each tweet, indicating how hateful that tweet is.
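The paper does not specify its scoring method in this abstract; as an illustration of what a per-tweet toxicity score looks like, here is a minimal sketch using a hypothetical lexicon of weighted terms (the word list, weights, and scaling are assumptions, not the authors' classifier; production systems typically use trained models):

```python
# Toy lexicon-based toxicity scorer: maps a tweet to a score in [0, 1].
# The lexicon and weights below are hypothetical placeholders.
TOXIC_WEIGHTS = {"hate": 0.9, "idiot": 0.7, "stupid": 0.5}

def toxicity_score(tweet: str) -> float:
    words = tweet.lower().split()
    if not words:
        return 0.0
    hits = sum(TOXIC_WEIGHTS.get(w, 0.0) for w in words)
    # Scale the per-word average and cap at 1.0.
    return min(1.0, hits / len(words) * 3)

print(toxicity_score("have a nice day"))  # non-toxic tweet scores 0.0
```

In practice any scorer that outputs a continuous value per tweet can feed the spread model described below.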
Toxicity spread has not been adequately addressed in the existing literature. The two popular paradigms for modelling information spread, namely the Susceptible-Infected-Recovered (SIR) model and its variants, and spreading-activation (SPA) models, are not suitable for modelling toxicity spread. The first paradigm employs a threshold and categorizes tweets as either toxic or non-toxic, while the second treats hate as energy and applies energy-conservation principles to model its propagation. Through analysis of a Twitter dataset of 19.58 million tweets, we observe that the total toxicity, as well as the average toxicity of original tweets and retweets in the network, does not remain constant but increases over time.
We therefore propose a new model of toxicity spread. First, we categorize users into three distinct groups: Amplifiers, Attenuators, and Copycats. These categories are assigned based on a user's exchange of toxicity: Amplifiers send out more toxicity than they receive, Attenuators receive more toxicity than they generate, and Copycats simply mirror the hate they receive. We perform extensive experimentation on Barabási–Albert (BA) graphs, as well as subgraphs extracted from the Twitter dataset. Our model replicates the observed patterns of toxicity.
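The round-based spread over a BA graph with the three user types can be sketched as follows. This is a minimal illustrative simulation, not the authors' exact update rule: the amplification/attenuation factors, the uniform random type assignment, and the averaging of neighbour toxicity are all assumptions.

```python
import random
import networkx as nx

# Hypothetical per-type re-emission factors (not from the paper).
AMPLIFY, ATTENUATE, COPY = 1.2, 0.8, 1.0

def simulate(n=1000, m=3, rounds=10, seed=42):
    """Simulate toxicity spread on a Barabási–Albert graph.

    Each user re-emits the average toxicity received from neighbours,
    scaled by their type's factor and capped at 1.0. Returns the total
    network toxicity after each round.
    """
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    factor = {v: rng.choice([AMPLIFY, ATTENUATE, COPY]) for v in g}
    tox = {v: rng.random() for v in g}  # initial toxicity scores in [0, 1]
    totals = [sum(tox.values())]
    for _ in range(rounds):
        new_tox = {}
        for v in g:
            nbrs = list(g[v])  # BA graphs guarantee degree >= m, so non-empty
            incoming = sum(tox[u] for u in nbrs) / len(nbrs)
            new_tox[v] = min(1.0, factor[v] * incoming)
        tox = new_tox
        totals.append(sum(tox.values()))
    return totals

totals = simulate()
print(f"total toxicity: round 0 = {totals[0]:.1f}, round 10 = {totals[-1]:.1f}")
```

With more Amplifiers than Attenuators, total toxicity drifts upward across rounds, qualitatively matching the non-conservation observed in the dataset; energy-conserving SPA-style models cannot produce this behaviour by construction.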
Analysing the Spread of Toxicity on Twitter