DOI: 10.1145/1390334.1390374

A study of methods for negative relevance feedback

Published: 20 July 2008

Abstract

Negative relevance feedback is a special case of relevance feedback in which we do not have any positive examples; this often happens when the topic is difficult and the search results are poor. Although in principle any standard relevance feedback technique can be applied to negative relevance feedback, it may not perform well due to the lack of positive examples. In this paper, we conduct a systematic study of methods for negative relevance feedback. We compare a set of representative negative feedback methods, covering both vector-space models and language models, as well as several special heuristics for negative feedback. Evaluating negative feedback methods requires a test set with a sufficient number of difficult topics, but naturally difficult topics are scarce in existing test collections. We therefore use two sampling strategies to adapt a test collection with easy topics to the evaluation of negative feedback. Experimental results on several TREC collections show that language-model-based negative feedback methods are generally more effective than those based on vector-space models, and that using multiple negative models is an effective heuristic for negative feedback. Our results also show that it is feasible to adapt test collections with easy topics for evaluating negative feedback methods through sampling.
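To make the feedback setting concrete, the sketch below illustrates two of the ideas the abstract names, in Python with plain term-weight vectors: a negative-only Rocchio update (the vector-space baseline, which with no positive examples reduces to subtracting a weighted negative centroid from the query) and the multiple-negative-models heuristic, which penalizes a candidate document by its maximum similarity to any individual negative document rather than to a single aggregated negative model. This is a minimal sketch under assumed TF-IDF-style vectors; the function names and the parameters beta and gamma are illustrative, not the paper's notation or exact formulation.

```python
import numpy as np

def rocchio_negative(query_vec, neg_doc_vecs, beta=0.25):
    """Negative-only Rocchio update (hypothetical parameterization).

    With no positive examples, the classic Rocchio formula reduces to
    subtracting a weighted centroid of the negative documents from the
    original query vector.
    """
    neg_centroid = np.mean(neg_doc_vecs, axis=0)
    return query_vec - beta * neg_centroid

def cosine(a, b):
    # Cosine similarity; the epsilon guards against zero-length vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def score_with_multiple_negatives(doc_vec, query_vec, neg_doc_vecs, gamma=0.5):
    """Multiple-negative-models heuristic (illustrative form).

    Instead of collapsing all negative documents into one model, keep
    each as its own model and penalize a candidate document by its
    MAXIMUM similarity to any single negative document; this copes with
    results that are non-relevant for different reasons.
    """
    penalty = max(cosine(doc_vec, n) for n in neg_doc_vecs)
    return cosine(doc_vec, query_vec) - gamma * penalty

if __name__ == "__main__":
    query = np.array([1.0, 0.0, 1.0, 0.0])          # toy term-weight vectors
    negatives = [np.array([0.0, 1.0, 1.0, 0.0]),
                 np.array([0.0, 0.0, 1.0, 1.0])]
    candidate = np.array([1.0, 0.0, 0.5, 0.0])

    print(rocchio_negative(query, negatives))
    print(score_with_multiple_negatives(candidate, query, negatives))
```

In a full system these penalties would be computed from bottom-ranked or explicitly judged non-relevant documents and used to rerank the original result list; a language-model analogue would replace the cosine penalty with, for example, a divergence-based score against an estimated negative topic model.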




    Published In

SIGIR '08: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2008, 934 pages
ISBN: 9781605581644
DOI: 10.1145/1390334

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. difficult topics
    2. language models
    3. negative feedback
    4. vector space models

    Qualifiers

    • Research-article

Conference

SIGIR '08

Acceptance Rates

Overall acceptance rate: 792 of 3,983 submissions, 20%


    Cited By

• (2024) Simulating Conversational Search Users with Parameterized Behavior. Proceedings of the 2024 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, 72-81. https://doi.org/10.1145/3673791.3698425. Online publication date: 8-Dec-2024.
• (2024) Enhancing Interactive Image Retrieval With Query Rewriting Using Large Language Models and Vision Language Models. Proceedings of the 2024 International Conference on Multimedia Retrieval, 978-987. https://doi.org/10.1145/3652583.3658032. Online publication date: 30-May-2024.
• (2024) Content-Based Exclusion Queries in Keyword-Based Image Retrieval. Proceedings of the 2024 International Conference on Multimedia Retrieval, 1145-1149. https://doi.org/10.1145/3652583.3657619. Online publication date: 30-May-2024.
• (2023) Entity-Based Relevance Feedback for Document Retrieval. Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval, 177-187. https://doi.org/10.1145/3578337.3605128. Online publication date: 9-Aug-2023.
• (2023) Boosting Feedback: A Framework for Enhancing Ground Truth Data Collection. 2023 IEEE International Conference on Big Data (BigData), 1982-1989. https://doi.org/10.1109/BigData59044.2023.10386658. Online publication date: 15-Dec-2023.
• (2022) Improving Personalized Search with Dual-Feedback Network. Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 210-218. https://doi.org/10.1145/3488560.3498447. Online publication date: 11-Feb-2022.
• (2021) Asking Clarifying Questions Based on Negative Feedback in Conversational Search. Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, 157-166. https://doi.org/10.1145/3471158.3472232. Online publication date: 11-Jul-2021.
• (2021) Impact of Interaction Strategies on User Relevance Feedback. Proceedings of the 2021 International Conference on Multimedia Retrieval, 590-598. https://doi.org/10.1145/3460426.3463663. Online publication date: 24-Aug-2021.
• (2021) Peer recommendation using negative relevance feedback. Sādhanā, 46(4). https://doi.org/10.1007/s12046-021-01763-5. Online publication date: 23-Nov-2021.
• (2020) Guided Transformer. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1131-1140. https://doi.org/10.1145/3397271.3401061. Online publication date: 25-Jul-2020.
