Abstract
Target Oriented Network Intelligence Collection (TONIC) is the problem of acquiring the maximum number of profiles in an online social network so as to maximize the information gathered about a given target. The acquired profiles, referred to as leads in this paper, are expected to contain information relevant to the target profile. Previously, the TONIC problem has been solved by modelling it as a volatile multi-armed bandit problem with stationary reward distributions. The limitation of this approach is that, in TONIC, the underlying reward distribution changes with each exploration, and this change needs to be incorporated when making future acquisitions. This paper presents a solution to the TONIC problem that models it as a volatile bandit problem with non-stationary reward distributions; it introduces a new approach and compares its performance with existing algorithms.
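To make the non-stationary bandit idea concrete, the sketch below implements discounted UCB (Kocsis and Szepesvári, cited in the references), a standard policy for non-stationary rewards in which past observations are exponentially down-weighted. This is an illustrative sketch, not the paper's algorithm: the arm names, the `arms` callable interface, and the parameter values are hypothetical, and the volatility of TONIC arms (leads appearing and disappearing during crawling) is omitted for brevity.

```python
import math

def discounted_ucb(arms, rounds, gamma=0.95, c=2.0):
    """Discounted UCB sketch: past rewards are decayed by `gamma` each
    round so the index tracks a non-stationary reward distribution.

    arms: dict mapping arm name -> callable returning a reward
          (in TONIC terms, querying a profile for new leads).
    """
    s = {a: 0.0 for a in arms}  # discounted reward sums
    n = {a: 0.0 for a in arms}  # discounted pull counts
    history = []
    for _ in range(rounds):
        total_n = sum(n.values())

        def index(a):
            if n[a] == 0:
                return float("inf")  # pull each arm at least once
            mean = s[a] / n[a]
            bonus = c * math.sqrt(math.log(max(total_n, 1.0)) / n[a])
            return mean + bonus

        arm = max(arms, key=index)
        reward = arms[arm]()  # query the environment
        # decay all past statistics, then record the new observation
        for a in arms:
            s[a] *= gamma
            n[a] *= gamma
        s[arm] += reward
        n[arm] += 1.0
        history.append(arm)
    return history
```

Because decayed pull counts shrink over time, the exploration bonus of a neglected arm grows again, so the policy keeps re-checking arms whose rewards may have changed since they were last pulled.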
References
Stern, R., Samama, L., Puzis, R., Beja, T., Bnaya, Z.: TONIC: Target oriented network intelligence collection for the social web. In: 27th AAAI Conference on Artificial Intelligence, pp. 1184–1190 (2013)
Samama-Kachko, L.: Target oriented network intelligence collection (TONIC) (2014)
Samama-Kachko, L., Stern, R., Felner, A.: Extended framework for target oriented network intelligence collection. In: Symposium on Combinatorial Search (SoCS), pp. 131–138 (2014)
Bnaya, Z., Puzis, R., Stern, R., Felner, A.: Bandit algorithms for social network queries. In: SocialCom/PASSAT/BigData/EconCom/BioMedCom 2013, pp. 148–153 (2013)
Chakrabarti, D., Kumar, R., Radlinski, F., Upfal, E.: Mortal multi-armed bandits. In: Neural Information Processing Systems, pp. 273–280 (2008)
Bnaya, Z., Puzis, R., Stern, R., Felner, A.: Volatile multi-armed bandits for guaranteed targeted social crawling. In: Late Breaking Papers at the Twenty-Seventh AAAI Conference on Artificial Intelligence, pp. 8–10 (2013)
Auer, P.: Using confidence bounds for exploration-exploitation trade-offs. JMLR 3, 397–422 (2002)
Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47, 235–256 (2002). https://doi.org/10.1023/A:1013689704352
Garivier, A., Moulines, E.: On upper-confidence bound policies for non-stationary bandit problems (2008). arXiv:0805.3415
Kocsis, L., Szepesvári, C.: Discounted UCB. In: 2nd PASCAL Challenges Workshop, pp. 784–791 (2006)
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Shaha, A., Arya, D., Tripathy, B.K. (2020). Implementation of Exploration in TONIC Using Non-stationary Volatile Multi-arm Bandits. In: Das, K., Bansal, J., Deep, K., Nagar, A., Pathipooranam, P., Naidu, R. (eds) Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 1048. Springer, Singapore. https://doi.org/10.1007/978-981-15-0035-0_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-0034-3
Online ISBN: 978-981-15-0035-0
eBook Packages: Intelligent Technologies and Robotics (R0)