Abstract
Relevance feedback is the most popular query reformulation strategy. However, click data used as user feedback is not fully reliable, since the quality of a ranked result influences the feedback the user gives. This paper proposes an evaluation measure called QR (quality of a ranked result) to estimate how good a ranked result is. The quality of the current ranked result is then used to predict the relevance of individual feedback documents, so that better feedback documents play a more important role in the re-ranking process. Experiments show that the QR measure is directly proportional to the DCG measure while requiring no manual labeling, and that the new re-ranking algorithm (QR-linear) outperforms the two baseline algorithms, especially when the number of feedback documents is large.
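The abstract compares QR against DCG and describes weighting feedback documents by the estimated quality of the current ranking. The sketch below shows the standard DCG computation, plus a hypothetical quality-weighted Rocchio-style feedback update; the exact QR formula and update rule are not given in the abstract, so the `quality` weighting here is an illustrative assumption, not the authors' method.

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain over a ranked list of graded
    # relevance scores: rel_i / log2(i + 1), with ranks starting at 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def quality_weighted_update(query, feedback_docs, quality, beta=0.5):
    # Hypothetical Rocchio-style update: add feedback document terms to
    # the query vector, scaled by `quality` (an estimate, e.g. QR, of how
    # good the current ranking is) so feedback from better rankings
    # contributes more. `beta` is the usual feedback weight.
    updated = dict(query)
    n = len(feedback_docs)
    for doc in feedback_docs:
        for term, weight in doc.items():
            updated[term] = updated.get(term, 0.0) + beta * quality * weight / n
    return updated
```

For example, with `query = {"web": 1.0}` and one clicked document `{"web": 1.0, "rank": 2.0}`, a high-quality ranking (`quality = 1.0`) shifts the query vector toward the clicked document twice as strongly as a ranking judged half as good (`quality = 0.5`).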
This work is supported by the Key Program of the National Natural Science Foundation of China (60435020) and NSFC Grants 60573166 and 60603056.
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Gong, B., Peng, B., Li, X. (2007). A Personalized Re-ranking Algorithm Based on Relevance Feedback. In: Chang, K.C.-C., et al. (eds.) Advances in Web and Network Technologies, and Information Management. APWeb/WAIM 2007. Lecture Notes in Computer Science, vol. 4537. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72909-9_30
Print ISBN: 978-3-540-72908-2
Online ISBN: 978-3-540-72909-9