
Chasing Offensive Conduct in Social Networks: A Reputation-Based Practical Approach for Frisber

Published: 07 December 2015

Abstract

Social network users take advantage of anonymity to spread rumors or gossip about others, making it important to provide means of reporting offensive conduct. This article presents a proposal to manage such reports automatically, considering not only users’ public behavior but also private messages exchanged between users. In both cases, the automatic approach relies on the reporters’ reputation together with other metrics intrinsic to social networks. Promising results from adopting the proposed reporting methods on Frisber, a geolocalized social network in production, are presented, along with experiments based on real data extracted from that network.
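
The abstract above describes the approach only at a high level. As a purely illustrative sketch of the general idea, and not the formulas used in the article itself, a reputation-weighted report score could be computed as follows; the names, the weighting scheme, and the moderation threshold are assumptions introduced only for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class Report:
    reporter_id: str
    reputation: float  # reporter's reputation in [0, 1], maintained by the network


def offense_score(reports: List[Report], volume_weight: float = 0.3) -> float:
    """Illustrative score (not the article's actual formula) combining the
    reporters' reputation with a simple network metric: the number of
    distinct users who reported the item."""
    if not reports:
        return 0.0
    # Average reputation of the users who filed the reports.
    avg_reputation = sum(r.reputation for r in reports) / len(reports)
    # Saturating function of the number of distinct reporters, standing in
    # for the "other metrics intrinsic to social networks" mentioned above.
    distinct_reporters = len({r.reporter_id for r in reports})
    volume = 1.0 - 1.0 / (1.0 + distinct_reporters)
    return (1.0 - volume_weight) * avg_reputation + volume_weight * volume


# Hypothetical usage: three reports about the same post, filed by two users.
reports = [Report("alice", 0.9), Report("alice", 0.9), Report("bob", 0.4)]
if offense_score(reports) > 0.6:  # hypothetical moderation threshold
    print("Escalate the reported content to a human moderator.")

In this sketch, reports from highly reputed users carry more weight, so a few trustworthy reporters can trigger moderation while isolated low-reputation reports cannot.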

Published In

ACM Transactions on Internet Technology, Volume 15, Issue 4
Special Issue on Trust in Social Networks and Systems
December 2015, 88 pages
ISSN: 1533-5399
EISSN: 1557-6051
DOI: 10.1145/2851090
  • Editor: Munindar P. Singh
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 07 December 2015
Accepted: 01 June 2015
Revised: 01 April 2015
Received: 01 August 2014
Published in TOIT Volume 15, Issue 4

Author Tags

  1. Social networks
  2. abusive user
  3. offensive content
  4. reporting system
  5. trust assessment

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • Spanish MICINN (project DHARMA, Dynamic Heterogeneous Threats Risk Management and Assessment)
  • European Commission (FEDER/ERDF)
  • Research Groups of Excellence granted by the Séneca Foundation


Cited By

  • (2023) Support towards emergency event processing via fine-grained analysis on users' expressions. Aslib Journal of Information Management 76(2), 212-232. https://doi.org/10.1108/AJIM-05-2022-0263. Online publication date: 5-Jan-2023.
  • (2021) Towards a chatbot for evidence gathering on the dark web. In Proceedings of the 3rd Conference on Conversational User Interfaces, 1-3. https://doi.org/10.1145/3469595.3469598. Online publication date: 27-Jul-2021.
  • (2020) C3-Sex: A Conversational Agent to Detect Online Sex Offenders. Electronics 9(11), 1779. https://doi.org/10.3390/electronics9111779. Online publication date: 27-Oct-2020.
  • (2020) Veracity assessment of online data. Decision Support Systems 129(C). https://doi.org/10.1016/j.dss.2019.113132. Online publication date: 1-Feb-2020.
  • (2017) Using stratified privacy for personal reputation defense in online social networks. In Proceedings of the Symposium on Applied Computing, 1037-1044. https://doi.org/10.1145/3019612.3019813. Online publication date: 3-Apr-2017.
  • (2017) Shall I post this now? Optimized, delay-based privacy protection in social networks. Knowledge and Information Systems 52(1), 113-145. https://doi.org/10.1007/s10115-016-1010-4. Online publication date: 1-Jul-2017.
