
A Personalized Re-ranking Algorithm Based on Relevance Feedback

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4537)

Abstract

Relevance feedback is the most popular query reformulation strategy. However, click data used as user feedback is not entirely reliable, because the quality of a ranked result influences the feedback a user gives. This paper proposes an evaluation measure called QR (quality of a ranked result) that estimates how good a ranked result is. The quality of the current ranked result is then used to predict the relevance of individual feedback documents, so that better feedback documents play a more important role in re-ranking. Experiments show that the QR measure is directly proportional to the DCG measure while requiring no manual labels, and that the new re-ranking algorithm (QR-linear) outperforms two baseline algorithms, especially when the amount of feedback is large.

This work was supported by the Key Program of the National Natural Science Foundation of China (grant 60435020) and by NSFC grants 60573166 and 60603056.
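Only the abstract is shown on this page, so the paper's exact QR formula and QR-linear update are not available here. The short Python sketch below only illustrates the idea the abstract describes, under stated assumptions: qr_score is a hypothetical DCG-like stand-in for QR computed from click positions alone (no manual labels), and qr_linear_rerank is a hypothetical Rocchio-style update in which each feedback document's weight grows linearly with the QR of the ranking it was clicked in. None of these names or parameters come from the paper.

import numpy as np

def qr_score(clicked_ranks, k=10):
    # Hypothetical stand-in for the paper's QR measure: reward clicks at
    # high positions with a DCG-like log discount. Uses click data only,
    # no manual relevance labels.
    return sum(1.0 / np.log2(r + 1.0) for r in clicked_ranks if r <= k)

def qr_linear_rerank(query_vec, doc_vecs, feedback, alpha=1.0, beta=0.75):
    # Hypothetical Rocchio-style update: each feedback document is weighted
    # linearly by the QR of the ranked list it was clicked in, so feedback
    # gathered from better rankings influences re-ranking more.
    # feedback: list of (doc_index, qr_of_source_ranking) pairs.
    q = alpha * np.asarray(query_vec, dtype=float)
    for idx, qr in feedback:
        q = q + beta * qr * doc_vecs[idx]
    scores = doc_vecs @ q                # dot-product relevance scores
    return np.argsort(-scores)           # document indices, best first

# Toy usage: doc 1 was clicked in a good ranking (clicks at ranks 1 and 3),
# doc 2 in a poor one (single click at rank 8), so doc 1 pulls the query more.
docs = np.eye(4)
query = np.array([1.0, 0.0, 0.0, 0.0])
feedback = [(1, qr_score([1, 3])), (2, qr_score([8]))]
print(qr_linear_rerank(query, docs, feedback))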

Editor information

Editors: Kevin Chen-Chuan Chang, Wei Wang, Lei Chen, Clarence A. Ellis, Ching-Hsien Hsu, Ah Chung Tsoi, Haixun Wang

Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gong, B., Peng, B., Li, X. (2007). A Personalized Re-ranking Algorithm Based on Relevance Feedback. In: Chang, K.C.-C., et al. (eds.) Advances in Web and Network Technologies, and Information Management. APWeb/WAIM 2007. Lecture Notes in Computer Science, vol. 4537. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72909-9_30

  • DOI: https://doi.org/10.1007/978-3-540-72909-9_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72908-2

  • Online ISBN: 978-3-540-72909-9

  • eBook Packages: Computer Science (R0)
