
Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization

  • Regular Paper
  • Published:
Knowledge and Information Systems

Abstract

Competitive influence maximization (CIM) is a key problem that seeks highly influential users so that a party's reward exceeds that of its competitor. Heuristic and game-theoretic approaches have been proposed to tackle the CIM problem; however, these approaches select all key influential users in a single first round, after the competitor's seed nodes are known. To move beyond first-round seed selection, reinforcement learning (RL)-based models have been proposed that allow parties to select seed nodes over multiple rounds without explicitly knowing the competitor's decisions. Despite their successful application to CIM, existing RL-based models require extensive training time to find an optimal strategy whenever the network or the agent's settings change. To improve the efficiency of RL models, we extend RL-based methods with transfer learning so that knowledge gained on a source network can be reused on a target network, reducing training time. Our objective is twofold: first, to design a state representation of the source and target networks that allows knowledge gained on the source network to be used efficiently on the target network; second, to identify the transfer learning (TL) method in reinforcement learning best suited to the competitive influence maximization problem. We validate the proposed TL methods under two different agent settings. Experimental results demonstrate that our TL methods achieve better or comparable performance relative to the baseline model while significantly reducing training time on target networks.
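To make the idea of reusing RL knowledge across networks concrete, the sketch below is purely illustrative and is not the paper's method: it uses tabular Q-learning over coarse degree-rank buckets as a network-agnostic state/action abstraction, a simple independent-cascade simulator for rewards, a random competitor, and non-interacting diffusion for the two parties. Graph sizes, the number of buckets, the reward definition, and all hyperparameters are assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation): a Q-table learned on a
# source network warm-starts learning on a target network, because states and
# actions are expressed as degree-rank buckets rather than concrete node IDs.
import random
from collections import defaultdict

def random_graph(n, p, seed=0):
    """Undirected Erdos-Renyi-style graph as an adjacency dict."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def degree_bucket(adj, v, n_buckets=4):
    """Map a node to a coarse degree-rank bucket shared across networks."""
    degrees = sorted(len(adj[u]) for u in adj)
    rank = degrees.index(len(adj[v])) / max(len(degrees) - 1, 1)
    return min(int(rank * n_buckets), n_buckets - 1)

def independent_cascade(adj, seeds, p=0.1, rng=None):
    """One independent-cascade simulation; returns the activated node set."""
    rng = rng or random.Random()
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def train(adj, episodes, rounds=3, q=None, alpha=0.2, gamma=0.9, eps=0.2):
    """Q-learning over (round, degree bucket); `q` may be a transferred table."""
    q = q if q is not None else defaultdict(float)
    rng = random.Random(1)
    rewards = []
    for _ in range(episodes):
        ours, theirs, total = [], [], 0.0
        for t in range(rounds):
            # epsilon-greedy over buckets, then pick a concrete node from it
            if rng.random() < eps:
                bucket = rng.randrange(4)
            else:
                bucket = max(range(4), key=lambda b: q[(t, b)])
            candidates = [v for v in adj if degree_bucket(adj, v) == bucket
                          and v not in ours and v not in theirs] or list(adj)
            ours.append(rng.choice(candidates))
            theirs.append(rng.choice([v for v in adj if v not in ours]))
            # simplified reward: own spread minus competitor spread,
            # with the two cascades simulated independently
            reward = (len(independent_cascade(adj, ours, rng=rng))
                      - len(independent_cascade(adj, theirs, rng=rng)))
            best_next = max(q[(t + 1, b)] for b in range(4)) if t + 1 < rounds else 0.0
            q[(t, bucket)] += alpha * (reward + gamma * best_next - q[(t, bucket)])
            total += reward
        rewards.append(total)
    return q, sum(rewards[-10:]) / 10  # Q-table and avg reward over last episodes

source = random_graph(60, 0.08, seed=0)
target = random_graph(100, 0.05, seed=1)

q_src, _ = train(source, episodes=200)                          # learn on source
_, scratch = train(target, episodes=50)                         # target, from scratch
_, transferred = train(target, episodes=50,
                       q=defaultdict(float, q_src))             # target, warm-started
print(f"avg reward from scratch:  {scratch:.1f}")
print(f"avg reward with transfer: {transferred:.1f}")
```

Because the Q-table is keyed by (round, degree bucket) rather than by node IDs, it remains meaningful on a graph of a different size, which is the property that makes the warm start possible in this toy setting.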


Notes

  1. https://graph-tool.skewed.de/static/doc/collection.html.


Acknowledgements

This work was supported by the Ministry of Science and Technology (MOST), Taiwan, under Grants MOST 108-2628-E-001-003-MY3, MOST 110-2221-E-007-085-MY3, and MOST 108-2221-E-007-064-MY3, and by Academia Sinica under Thematic Research Grant AS-TP-110-M07-2.

Author information


Corresponding author

Correspondence to Chih-Yu Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ali, K., Wang, CY. & Chen, YS. Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization. Knowl Inf Syst 64, 2059–2090 (2022). https://doi.org/10.1007/s10115-022-01696-3

