Research Article

Component-based Analysis of Dynamic Search Performance

Published: 22 November 2021

Abstract

In many search scenarios, such as exploratory, comparative, or survey-oriented search, users interact with dynamic search systems to satisfy multi-aspect information needs. These systems employ a variety of dynamic approaches that exploit user feedback at different levels of granularity. Although prior studies have provided insights into the role of many components of these systems, they relied on black-box, isolated experimental setups, so the effects of these components and their interactions are still not well understood. We address this with a methodology based on Analysis of Variance (ANOVA). We built a Grid of Points consisting of systems that instantiate three components in different ways: initial rankers, dynamic rerankers, and user feedback granularity. Using evaluation scores based on the TREC Dynamic Domain collections, we built several ANOVA models to estimate the effects. We found that (i) although all components significantly affect search effectiveness, the initial ranker has the largest effect size; (ii) the effect sizes of these components vary with the length of the search session and the effectiveness metric used; and (iii) initial rankers and dynamic rerankers have more prominent effects than user feedback granularity. To improve effectiveness, we therefore recommend improving the quality of initial rankers and dynamic rerankers, which does not require eliciting detailed user feedback that might be expensive or invasive to collect.
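The following sketch illustrates the kind of component-level analysis the abstract describes: it crosses three components into a Grid of Points, attaches a per-topic effectiveness score to each system, and fits an ANOVA model whose factors are the topic and the three components. It is a minimal sketch, not the authors' code; the component levels, topic identifiers, and random scores are illustrative placeholders, where the study instead used scores computed over the TREC Dynamic Domain collections.

```python
# Minimal sketch (assumed setup, not the paper's actual runs) of a
# component-based ANOVA over a Grid of Points of dynamic search systems.
import itertools
import random

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

random.seed(42)

# Hypothetical instantiations of the three components under study.
initial_rankers = ["BM25", "QL", "DFR"]
dynamic_rerankers = ["Rocchio", "MMR", "xQuAD"]
feedback_levels = ["document", "passage", "sentence"]
topics = [f"topic{i}" for i in range(1, 27)]

rows = []
for ranker, reranker, feedback, topic in itertools.product(
        initial_rankers, dynamic_rerankers, feedback_levels, topics):
    rows.append({
        "ranker": ranker,
        "reranker": reranker,
        "feedback": feedback,
        "topic": topic,
        # Placeholder: in the real analysis this would be the system's
        # effectiveness score (e.g., a session metric) on this topic.
        "score": random.random(),
    })
df = pd.DataFrame(rows)

# Main-effects ANOVA: topic plus the three system components.
model = ols("score ~ C(topic) + C(ranker) + C(reranker) + C(feedback)",
            data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

# Omega squared, a standard effect-size estimate for ANOVA factors.
ms_error = anova.loc["Residual", "sum_sq"] / anova.loc["Residual", "df"]
effects = anova.drop(index="Residual").copy()
effects["omega_sq"] = ((effects["sum_sq"] - effects["df"] * ms_error)
                       / (anova["sum_sq"].sum() + ms_error))
print(effects)
```

With real evaluation scores in place of the random placeholders, the omega_sq column quantifies how much of the score variance each component explains, which is how an analysis of this shape would surface findings such as the initial ranker having the largest effect size.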



Published in

ACM Transactions on Information Systems, Volume 40, Issue 3
July 2022, 650 pages
ISSN: 1046-8188
EISSN: 1558-2868
DOI: 10.1145/3498357


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 22 November 2021
• Accepted: 1 August 2021
• Revised: 1 July 2021
• Received: 1 December 2020
