DOI: 10.1145/2872518.2888601

Detecting and Mitigating the Effect of Manipulated Reputation on Online Social Networks

Published: 11 April 2016

ABSTRACT

In recent times, online social networks (OSNs) are used not only to communicate but also to create a public or social image. Artists, celebrities, and even ordinary people use social networks to build their brand value and gain visibility, whether among a restricted set of people or the general public. To enable users to connect with others on the OSN and gain following and appreciation from them, OSNs provide social metrics such as Facebook likes, Twitter followers, and Tumblr reblogs; these metrics give OSN users a sense of social reputation. As more users try to leverage social media to create brand value and become more influential, spammers lure such users into manipulating their social reputation through paid services (blackmarkets) or collusion networks. In this work, we aim to build a robust alternative social reputation system and to detect users with manipulated social reputation. To do so, we first study the underlying structure of various sources of crowdsourced social reputation manipulation, such as blackmarkets, supply-driven microtask websites, and collusion networks. We then build a mechanism for early detection of users with manipulated social reputation. Our initial results are encouraging and substantiate the feasibility of a robust social reputation system.
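The abstract does not disclose the detection features the authors use, so the sketch below is only one plausible illustration of feature-based early detection of reputation-manipulated accounts. The feature names (follower/following ratio, account age, tweet rate, recent follower-gain fraction) and the synthetic distributions are assumptions made for demonstration, not the paper's actual feature set or data.

```python
# A minimal sketch of feature-based detection of accounts with
# manipulated social reputation. All features and data here are
# hypothetical illustrations, not the authors' method or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-account features: follower/following ratio,
# account age (days), tweets per day, and the fraction of followers
# gained in the last week (sudden spikes may indicate purchases).
legit = np.column_stack([
    rng.lognormal(0.0, 1.0, n),   # follower/following ratio
    rng.uniform(100, 3000, n),    # account age in days
    rng.exponential(5.0, n),      # tweets per day
    rng.beta(2, 20, n),           # recent follower-gain fraction
])
manipulated = np.column_stack([
    rng.lognormal(2.0, 1.0, n),   # inflated ratio from bought followers
    rng.uniform(1, 500, n),       # often younger accounts
    rng.exponential(1.0, n),      # little organic activity
    rng.beta(10, 5, n),           # sharp recent follower spike
])

X = np.vstack([legit, manipulated])
y = np.array([0] * n + [1] * n)   # 1 = manipulated reputation

# Train a simple classifier and report held-out performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

On real OSN data, a robust reputation system would also need adversarially resilient features (e.g., network structure rather than easily inflated counts), since blackmarket operators can tune the simple volume-based signals shown above.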


• Published in
  WWW '16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web
  April 2016, 1094 pages
  ISBN: 9781450341448
  Copyright © 2016 is held by the International World Wide Web Conference Committee (IW3C2)

Publisher

International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland

Publication History

• Published: 11 April 2016


            Qualifiers

            • abstract

Acceptance Rates

WWW '16 Companion Paper Acceptance Rate: 115 of 727 submissions, 16%
Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
