Abstract
Competitive influence maximization (CIM) seeks highly influential users so that a party's reward exceeds its competitor's. Heuristic and game-theoretic approaches have been proposed to tackle the CIM problem. However, these approaches select all key influential users in a single first round, after the competitor's seed nodes are already known. To move beyond first-round seed selection, reinforcement learning (RL)-based models have been proposed that allow parties to select seed nodes over multiple rounds without explicitly knowing the competitor's decisions. Despite the successful application of RL to CIM, existing RL-based models require extensive training to find an optimal strategy whenever the network or the agent's settings change. To improve this efficiency, we extend transfer learning to RL-based methods, reducing training time by reusing knowledge gained on a source network for a target network. Our objective is twofold: first, to design a state representation of the source and target networks that makes the knowledge gained on the source network efficiently reusable on the target network; second, to identify the transfer learning (TL) method within RL that is best suited to the competitive influence maximization problem. We validate our proposed TL methods under two different agent settings. Experimental results demonstrate that our proposed TL methods achieve better or comparable performance to the baseline model while significantly reducing training time on target networks.
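The abstract's core mechanism, reusing a value function learned on a source network to warm-start training on a target network, can be illustrated with a minimal sketch. The toy MDP, its reward tables, and the `q_learning` helper below are all hypothetical stand-ins (not the paper's actual environment or state representation); the sketch only shows the value-transfer idea: train a tabular Q-function on a source task, then pass it as the initial Q-table for a structurally similar target task instead of starting from zeros.

```python
import random

def q_learning(env_rewards, n_states, n_actions, episodes=200,
               alpha=0.5, gamma=0.9, epsilon=0.1, q_init=None):
    """Tabular Q-learning on a toy MDP.

    env_rewards[s][a] is the reward for action a in state s; the next
    state is simply a % n_states (a crude stand-in for seed selection
    on a network). q_init warm-starts the table from a Q-function
    learned elsewhere -- the core idea behind value transfer in RL.
    """
    q = [row[:] for row in q_init] if q_init else \
        [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # fixed-length episode
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            r = env_rewards[s][a]
            s_next = a % n_states
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

random.seed(0)
# Hypothetical source and target "networks": same state/action space,
# slightly perturbed rewards (the transfer setting assumed here).
source = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.1], [0.3, 0.0, 1.0]]
target = [[0.9, 0.1, 0.2], [0.1, 0.9, 0.1], [0.3, 0.1, 0.9]]

q_source = q_learning(source, 3, 3)                   # train on source
q_transfer = q_learning(target, 3, 3, episodes=20,
                        q_init=q_source)              # warm start
q_scratch = q_learning(target, 3, 3, episodes=20)     # cold start
```

With only a few target-network episodes, the warm-started table already reflects the source policy, which is why transfer can cut target-side training time when the two tasks are related.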
Acknowledgements
This work was supported by the Ministry of Science and Technology (MOST), Taiwan, under Grants MOST 108-2628-E-001-003-MY3, MOST 110-2221-E-007-085-MY3, MOST 108-2221-E-007-064-MY3, and by Academia Sinica under Thematic Research Grant AS-TP-110-M07-2.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Ali, K., Wang, CY. & Chen, YS. Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization. Knowl Inf Syst 64, 2059–2090 (2022). https://doi.org/10.1007/s10115-022-01696-3