Abstract
Studies in interactive information retrieval (IIR) indicate that expert searchers differ from novices in many ways. In the present paper, we identify a number of behavioral dimensions along which searchers differ (e.g., cost, gain, and the accuracy of relevance assessment). We quantify these differences using simulated, multi-query search sessions. We then explore each dimension in turn to determine which differences are most effective in yielding superior retrieval performance. We find that more accurate snippet and document assessment contributes less to the cumulative gain of a session than the gain and cost structures do.
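To make the simulation setup concrete, the following is a minimal sketch of the kind of session model the abstract describes: a searcher scans a ranked list under a time budget, judges each snippet correctly with some probability (the assessment-accuracy dimension), and pays separate time costs for assessing snippets and reading documents (the cost structure), accumulating graded gain from the documents read. All parameter names and cost values here are illustrative assumptions, not the paper's actual experimental settings.

```python
import random

def simulate_session(ranked_gains, snippet_accuracy, snippet_cost,
                     read_cost, time_budget, seed=0):
    """Simulate one searcher scanning a ranked result list.

    ranked_gains     -- graded gain of each document in rank order (illustrative)
    snippet_accuracy -- probability of judging a snippet's relevance correctly
    snippet_cost     -- time spent assessing one snippet
    read_cost        -- additional time spent reading a document judged relevant
    time_budget      -- total session time available
    """
    rng = random.Random(seed)
    time_used = 0.0
    cumulated_gain = 0.0
    for gain in ranked_gains:
        time_used += snippet_cost          # assessing the snippet always costs time
        if time_used > time_budget:
            break
        relevant = gain > 0
        # With probability snippet_accuracy the searcher judges correctly;
        # otherwise the judgment is flipped (a fallible assessor).
        judged_relevant = relevant if rng.random() < snippet_accuracy else not relevant
        if judged_relevant:
            time_used += read_cost         # reading the full document costs more time
            if time_used > time_budget:
                break
            cumulated_gain += gain         # gain is credited only for documents read
    return cumulated_gain
```

Varying `snippet_accuracy` against `snippet_cost`/`read_cost` in such a model is one way to compare the dimensions the paper studies: with a fixed time budget, a cheaper cost structure lets the searcher reach more relevant documents even when assessments are noisy.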
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Pääkkönen, T. et al. (2015). Exploring Behavioral Dimensions in Session Effectiveness. In: Mothe, J., et al. (eds.) Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2015. Lecture Notes in Computer Science, vol. 9283. Springer, Cham. https://doi.org/10.1007/978-3-319-24027-5_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-24026-8
Online ISBN: 978-3-319-24027-5