Abstract
Distributed search systems are an emerging phenomenon in Web search: independent topic-specific search engines provide search services, and metasearchers distribute users' queries to only the most suitable engines. Previous research has investigated methods for engine selection and merging of search results, i.e. performance improvements from the user's perspective. We focus instead on performance from the service provider's point of view, e.g. income from queries processed versus resources used to answer them. We analyse a scenario in which individual search engines compete for user queries by choosing which documents (topics) to index. The challenge is that the utility of an engine's actions depends on the uncertain actions of its competitors, so naive strategies (e.g. blindly indexing many popular documents) are ineffective. We model the competition between search engines as a stochastic game and propose a reinforcement learning approach to managing search index contents. We evaluate our approach using a large log of user queries to 47 real search engines.
This research was supported by grant SFI/01/F.1/C015 from Science Foundation Ireland, and grant N00014-03-1-0274 from the US Office of Naval Research.
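The competition described in the abstract can be illustrated with a toy simulation. In this sketch, each engine repeatedly chooses one of a few topics to index; the query income for a topic is split among all engines indexing it, minus a fixed indexing cost, and each engine learns action values with a simple epsilon-greedy bandit update. The topic popularities, cost, and learning rule here are illustrative assumptions for exposition, not the paper's actual model or algorithm.

```python
import random

# Assumed toy numbers: queries per period for topics 0..2, and a flat
# resource cost for indexing any one topic.
POPULARITY = [100.0, 60.0, 10.0]
COST = 25.0

def payoff(choices):
    """Income per engine: topic popularity split among co-indexers, minus cost."""
    counts = {}
    for c in choices:
        counts[c] = counts.get(c, 0) + 1
    return [POPULARITY[c] / counts[c] - COST for c in choices]

def learn(n_engines=2, episodes=5000, eps=0.1, alpha=0.05, seed=0):
    """Epsilon-greedy bandit learners playing the indexing game repeatedly."""
    rng = random.Random(seed)
    q = [[0.0] * len(POPULARITY) for _ in range(n_engines)]
    for _ in range(episodes):
        # Each engine explores with probability eps, else picks its best topic.
        choices = [
            rng.randrange(len(POPULARITY)) if rng.random() < eps
            else max(range(len(POPULARITY)), key=lambda a, i=i: q[i][a])
            for i in range(n_engines)
        ]
        rewards = payoff(choices)
        for i, (a, r) in enumerate(zip(choices, rewards)):
            q[i][a] += alpha * (r - q[i][a])
    return q

q = learn()
# With these numbers, if both engines index topic 0 they each earn
# 100/2 - 25 = 25, whereas splitting topics 0 and 1 yields 75 and 35:
# blindly indexing the most popular topic is not jointly optimal, which
# is the kind of strategic interdependence the paper models as a
# stochastic game.
```

Even this crude setup exhibits the core difficulty the paper addresses: an engine's payoff depends on competitors' unobserved choices, so learners may specialise on different topics rather than all chasing the most popular one.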
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Khoussainov, R., Kushmerick, N. (2004). Distributed Web Search as a Stochastic Game. In: Callan, J., Crestani, F., Sanderson, M. (eds) Distributed Multimedia Information Retrieval. DIR 2003. Lecture Notes in Computer Science, vol 2924. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24610-7_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20875-4
Online ISBN: 978-3-540-24610-7