Abstract
Trolling describes a range of antisocial online behaviors that aim to disrupt the normal operation of online social networks and media. Existing approaches to combating trolling rely on human-based or automatic mechanisms for identifying trolls and troll posts. In this paper, we take a novel approach to the problem: our goal is to identify troll-vulnerable posts, that is, posts that are potential targets of trolls, so as to prevent trolling before it happens. To this end, we define three natural axioms that a troll vulnerability metric must satisfy and introduce metrics that satisfy them. We then define the troll vulnerability prediction problem, where, given a post, we aim to predict whether it is vulnerable to trolling. We construct models that use features from the content and the history of the post for the prediction. Our experiments with real data from Reddit demonstrate that our approach is successful in identifying a large fraction of the troll-vulnerable posts.
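The prediction task described above can be sketched as a binary classifier over per-post features. The sketch below is illustrative only: the feature names (`keyword_hits`, `past_targeted`), the keyword list, and the weights are hypothetical placeholders, not the paper's actual feature set or learned model, which combines richer content and posting-history signals.

```python
import math

# Hypothetical feature extraction for illustration only; the paper's
# feature set (content + history features) is richer than this sketch.
def extract_features(post_text, author_history):
    controversial = {"politics", "religion", "war"}  # placeholder keywords
    words = post_text.lower().split()
    keyword_hits = sum(w.strip(".,!?") in controversial for w in words)
    # Fraction of the author's past posts that attracted troll replies
    past_targeted = (sum(author_history) / len(author_history)
                     if author_history else 0.0)
    return [len(words), keyword_hits, past_targeted]

def vulnerability_score(features, weights, bias):
    # Simple logistic model: estimated P(post is troll-vulnerable)
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy weights; in practice these would be learned from labeled data
weights, bias = [0.01, 0.8, 2.0], -2.5

feats = extract_features("A hot take on politics and war.", [1, 0, 1])
score = vulnerability_score(feats, weights, bias)
print(score > 0.5)  # flag the post as troll-vulnerable above threshold 0.5
```

A real system would replace the hand-set weights with a model trained on posts labeled by whether they later received troll replies, which is the setup the abstract's Reddit experiments describe.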
Cite this article
Tsantarliotis, P., Pitoura, E. & Tsaparas, P. Defining and predicting troll vulnerability in online social media. Soc. Netw. Anal. Min. 7, 26 (2017). https://doi.org/10.1007/s13278-017-0445-2