DOI: 10.1145/2348283.2348289
Research article

Adaptive query suggestion for difficult queries

Published: 12 August 2012

ABSTRACT

Query suggestion is a useful tool for helping users formulate better queries. Although it has been found highly useful overall, its effect can vary from query to query. In this paper, we examine the impact of query suggestion on queries of different degrees of difficulty and find that it is much more useful for difficult queries than for easy ones. In addition, suggestions for difficult queries should rely less on their similarity to the original query. We use a learning-to-rank approach to select query suggestions based on several types of features, including a query performance prediction. Because query suggestion affects different queries differently, we propose an adaptive suggestion approach that makes suggestions only for difficult queries. We carry out experiments on real data from a search engine. Our results clearly indicate that an approach targeting difficult queries can bring higher gain than a uniform suggestion approach.
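The abstract describes an adaptive scheme: predict how difficult a query is, surface suggestions only for predicted-difficult queries, and rank candidate suggestions with a learning-to-rank model over several feature types (including a query performance prediction). The sketch below illustrates only that control flow under stated assumptions; the difficulty predictor, feature set, weights, and threshold are hypothetical stand-ins, not the paper's model or data.

```python
# Minimal sketch (not the authors' implementation) of adaptive query suggestion:
# 1) predict query difficulty, 2) only return suggestions when the query is
# predicted to be difficult, 3) rank candidates with a learned scoring function.
# All names (predict_difficulty, linear_score, toy_difficulty, ...) are
# hypothetical placeholders for illustration.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    suggestion: str
    features: List[float]  # e.g. similarity to the original query,
                           # click/session statistics, predicted performance


def linear_score(features: List[float], weights: List[float]) -> float:
    """Toy stand-in for a trained learning-to-rank scoring function."""
    return sum(f * w for f, w in zip(features, weights))


def adaptive_suggest(query: str,
                     candidates: List[Candidate],
                     predict_difficulty: Callable[[str], float],
                     weights: List[float],
                     difficulty_threshold: float = 0.5,
                     top_k: int = 3) -> List[str]:
    """Return suggestions only when the query is predicted to be difficult."""
    if predict_difficulty(query) < difficulty_threshold:
        return []  # easy query: showing suggestions brings little gain
    ranked = sorted(candidates,
                    key=lambda c: linear_score(c.features, weights),
                    reverse=True)
    return [c.suggestion for c in ranked[:top_k]]


if __name__ == "__main__":
    # Toy difficulty predictor: longer queries score as more "difficult".
    def toy_difficulty(q: str) -> float:
        return min(1.0, len(q.split()) / 5.0)

    cands = [
        Candidate("jaguar car price", [0.8, 0.4, 0.6]),
        Candidate("jaguar animal habitat", [0.5, 0.7, 0.9]),
    ]
    print(adaptive_suggest("jaguar xk 2004 maintenance cost history",
                           cands, toy_difficulty, weights=[0.2, 0.3, 0.5]))
```

In the paper's setting, the scoring function would be a trained ranker and the difficulty predictor a query performance predictor; per the abstract, the weight placed on similarity to the original query would be lower for difficult queries, and the threshold trades off suggestion coverage against the risk of distracting users on easy queries.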


Published in:

SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval
August 2012, 1236 pages
ISBN: 9781450314725
DOI: 10.1145/2348283
Copyright © 2012 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 792 of 3,983 submissions, 20%
