Why and how to deceive: game results with sociological evidence

  • Original Article
  • Published in Social Network Analysis and Mining

Abstract

As social networking sites continue to proliferate, online deception is becoming a significant problem. Deceptive users are no longer only lone wolves propagating hate messages and inappropriate content; increasingly, they are seemingly honest users who choose to deceive for selfish reasons. Their behavior negatively influences otherwise honest members of online communities, creating a snowball effect that damages entire communities. In this paper, we study the phenomenon of online deception and attempt to understand the dynamics of users’ deceptive behavior using a game-theoretic approach. We begin by formulating the decision process of a single user as a Markov chain with time-varying rewards. We then study the specific optimization problem a user faces in choosing whether to deceive when influenced by (1) their potential reward, (2) peer pressure, and (3) their comfort level with deception. We illustrate that reasonable equilibria can be achieved under certain simplifying assumptions. We then investigate the inverse problem: given observed equilibria, we show how a model can be fit to the data and how this model exposes information about the underlying social structure.
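As a rough illustration of the kind of decision process described above, the following minimal Python sketch simulates users who repeatedly choose whether to deceive based on a payoff that combines potential reward, peer pressure, and comfort with deception, iterated to a fixed point as a crude stand-in for the equilibria discussed in the paper. The weights alpha, beta, and gamma, the random peer graph, and the threshold update rule are all illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch only: a toy best-response simulation of the deception
# decision described in the abstract. The payoff weights (alpha, beta, gamma),
# the random peer graph, and the threshold update rule are assumptions made
# for illustration; they are not taken from the paper's model.
import random

random.seed(0)

N = 50                                # number of users
alpha, beta, gamma = 1.0, 2.0, 1.5    # assumed weights: reward, peer pressure, discomfort

# Random directed peer graph and per-user traits.
peers = {i: [j for j in range(N) if j != i and random.random() < 0.1] for i in range(N)}
reward = [random.random() for _ in range(N)]          # assumed payoff for deceiving
comfort = [random.random() for _ in range(N)]         # assumed comfort level with deception
deceive = [random.random() < 0.1 for _ in range(N)]   # initial behavior

def payoff_to_deceive(i):
    """Assumed utility of deceiving: reward plus peer pressure minus discomfort."""
    peer_frac = sum(deceive[j] for j in peers[i]) / len(peers[i]) if peers[i] else 0.0
    return alpha * reward[i] + beta * peer_frac - gamma * (1.0 - comfort[i])

# Synchronous best-response dynamics: stop at a fixed point, which plays the
# role of an equilibrium in this toy game.
for step in range(100):
    new_state = [payoff_to_deceive(i) > 0 for i in range(N)]
    if new_state == deceive:
        print(f"Fixed point after {step} rounds: {sum(new_state)}/{N} users deceive.")
        break
    deceive = new_state
else:
    print("No fixed point reached within 100 rounds.")
```

Because peer pressure enters the assumed payoff positively, deception in this toy version can cascade through the peer graph, mirroring the snowball effect mentioned in the abstract.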



Notes

  1. The data does not allow us to establish whether a user is directly replying to another user within the same thread.


Acknowledgments

Portions of Dr. Griffin’s work were supported by the Army Research Office under Grant W911NF-11-1-0487. Portions of Dr. Griffin’s and Dr. Squicciarini’s work were supported by the Army Research Office under Grant W911NF-13-1-0271.

Author information

Corresponding author

Correspondence to Anna Squicciarini.

Additional information

This article is part of the Topical Collection on Uncovering Deception in Social Media.

About this article

Cite this article

Squicciarini, A., Griffin, C. Why and how to deceive: game results with sociological evidence. Soc. Netw. Anal. Min. 4, 161 (2014). https://doi.org/10.1007/s13278-014-0161-0
