Dynamic reputation information propagation based malicious account detection in OSNs

Abstract

People around the world have become increasingly dependent on online social networks (OSNs); meanwhile, the number of malicious accounts in OSNs is also growing rapidly. Traditional content-based data mining techniques and user-graph-based methods demand ever more computing resources from network providers, especially for networks with huge and complicated topologies. Moreover, traditional content-based analysis methods must keep pace with the times: they need to be retrained whenever the structure of users’ data changes or when malicious content rides on popular trends. To reduce the dependence on network providers’ computing resources and improve detection precision, this paper proposes a novel malicious account detection method based on the dynamic propagation of users’ reputation information. By comparing a requesting user’s comprehensive reputation with a malicious threshold, a user can mark the requesting user’s reputation, thereby detecting malicious accounts and providing indirect recommended reputation information about the requesting user to other users. Experiments on two real-world datasets and comparisons with two typical efficient detection algorithms show that the proposed algorithm can effectively detect malicious accounts without a central detection system while improving detection precision.
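
As a rough illustration of the decision rule sketched in the abstract, the following minimal Python snippet shows how a user might mark a requesting user and keep the label as indirect recommended reputation information for other users. This is not the authors’ implementation: the function name, variable names, and numeric values are hypothetical, and it assumes that a comprehensive reputation below the malicious threshold indicates malice.

```python
# Hypothetical sketch of the local decision step described in the abstract.
# Assumption: a comprehensive reputation below the malicious threshold
# means the requesting user is treated as malicious.

def mark_requesting_user(comprehensive_reputation: float,
                         malicious_threshold: float) -> str:
    """Label the requesting user by comparing its comprehensive
    reputation against the malicious threshold."""
    return "malicious" if comprehensive_reputation < malicious_threshold else "normal"

# The locally derived label doubles as indirect recommended reputation
# information that can be forwarded to other users, so no central
# detection system is required.
recommendations = {}  # requesting_user_id -> label, shared with neighbours

label = mark_requesting_user(comprehensive_reputation=0.23,
                             malicious_threshold=0.40)
recommendations["user_eta"] = label
print(label)  # -> "malicious" under the assumed threshold
```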


Abbreviations

\(R_{i\eta}\): Reputation of user \(\eta\) recorded by user i

\(V_{i\eta}^{D}\): Direct reputation vector of user \(\eta\)

\(V_{i\eta}^{I}\): Indirect reputation vector of user \(\eta\)

\(\sigma_{ij\_\eta}\): Variance of user \(\eta\)’s recommended reputation sent by user j

\(\sigma_{i\_\eta}\): Variance threshold of user \(\eta\)’s recommended reputation

\(S^{T}_{i\_\eta}\): Set of recommended information on user \(\eta\) during period T

\(\alpha^{D}_{ij}\): Counter of user j’s malicious recommended information

\(\beta^{D}_{ij}\): Counter of user j’s real recommended information

\(\upsilon^{D}_{ij}\): Counter of user j’s uncertain recommended information

\(\delta_{r}\): Threshold of recommended information that can be aggregated

\(M^{D}_{ij}\): Direct malicious factor

\(M^{I}_{ij}\): Indirect malicious factor

\(M^{C}_{ij}\): Comprehensive malicious factor

\(N^{D}_{ij}\): Direct normal factor

\(N^{I}_{ij}\): Indirect normal factor

\(N^{C}_{ij}\): Comprehensive normal factor

\(U^{D}_{ij}\): Direct uncertain factor

\(U^{I}_{ij}\): Indirect uncertain factor

\(U^{C}_{ij}\): Comprehensive uncertain factor

\(P_{m}\): Proportion of malicious users

\(\omega^{D}\): Weight factor of directly observable information

\(\omega^{I}\): Weight factor of indirectly observable information

\(\gamma\): Characteristic of a user’s preference for information

\(\kappa\): Relative parameter of prejudice against behavior

\(\delta_{m}\): Threshold above which a user is regarded as malicious

\(M_{\theta}\): Malice threshold

H: Set of users in the online social network

\(\sigma^{N}_{ij\_\eta}\): Mean variance of a normal user’s recommended reputation

\(\sigma^{M}_{ij\_\eta}\): Mean variance of a malicious user’s recommended reputation

\(P_{n}\): Proportion of normal users
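
To make the notation above concrete, here is a minimal, hypothetical Python sketch of one plausible way the direct and indirect factors could be fused into comprehensive factors using the weight factors \(\omega^{D}\) and \(\omega^{I}\), and then compared with a malice threshold. The weighted-sum form, the example weights, and the rule that a larger comprehensive malicious factor indicates malice are assumptions made for illustration, not the paper’s exact formulas.

```python
# Hypothetical fusion of direct/indirect factors into comprehensive ones.
# Assumption: comprehensive factors are weighted sums of the direct and
# indirect factors (weights omega_d + omega_i = 1); the paper's exact
# aggregation may differ.

def comprehensive_factors(m_d, n_d, u_d,   # direct malicious/normal/uncertain factors
                          m_i, n_i, u_i,   # indirect malicious/normal/uncertain factors
                          omega_d=0.6, omega_i=0.4):
    m_c = omega_d * m_d + omega_i * m_i   # comprehensive malicious factor M^C_{ij}
    n_c = omega_d * n_d + omega_i * n_i   # comprehensive normal factor N^C_{ij}
    u_c = omega_d * u_d + omega_i * u_i   # comprehensive uncertain factor U^C_{ij}
    return m_c, n_c, u_c

def is_malicious(m_c, malice_threshold=0.5):
    """Compare the comprehensive malicious factor with the malice
    threshold M_theta (assumed: exceeding it flags the user)."""
    return m_c > malice_threshold

m_c, n_c, u_c = comprehensive_factors(m_d=0.7, n_d=0.2, u_d=0.1,
                                      m_i=0.5, n_i=0.3, u_i=0.2)
print(is_malicious(m_c))  # -> True with these illustrative inputs
```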

Acknowledgements

The authors thank COMPSE 2018 (Bangkok, Thailand, March 2018) for the special issue publication, and thank the referees and editors of this special issue.

Funding

This work was supported in part by the Major Program of the National Natural Science Foundation of China (71633006), the National Natural Science Foundation of China (61672540, 61379057), the China Postdoctoral Science Foundation funded project (2017M612586), and the Postdoctoral Science Foundation of Central South University (185684).

Author information

Corresponding authors

Correspondence to Zhigang Chen or Jia Wu.

About this article

Cite this article

Liang, H., Chen, Z. & Wu, J. Dynamic reputation information propagation based malicious account detection in OSNs. Wireless Netw 26, 4825–4838 (2020). https://doi.org/10.1007/s11276-018-1795-z
