Abstract
The dominant method for evaluating search engines is the Cranfield paradigm, but existing metrics do not account for some modern search engine features, such as document snippets. In this paper, we propose a new metric for search engine evaluation: the effective time ratio. The effective time ratio measures the ratio of the effective time a user spends obtaining relevant information to the total search time. For a retrieval system that does not present document snippets, its value is identical to precision. For a search engine with snippets, theoretical analysis shows that its value reflects both retrieval performance and snippet quality. We further conduct a real user study, showing that the effective time ratio reflects users' satisfaction better than existing metrics based on document relevance and/or snippet relevance.
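The metric described above can be sketched in a few lines. This is a minimal illustration of the idea only: the session structure and the notion of "effective" time below are assumptions for illustration, not the paper's actual formulation.

```python
def effective_time_ratio(events):
    """Effective time ratio for one search session.

    events: list of (seconds_spent, was_relevant) pairs, one per
    result the user examined. Time spent on relevant results counts
    as effective time; the ratio is effective time / total time.
    """
    total = sum(t for t, _ in events)
    effective = sum(t for t, relevant in events if relevant)
    return effective / total if total > 0 else 0.0

# Hypothetical session: the user examines three results, spending
# 10s and 15s on relevant ones and 5s on an irrelevant one.
session = [(10.0, True), (5.0, False), (15.0, True)]
print(effective_time_ratio(session))  # 25/30 ≈ 0.833
```

Note that if every examined result takes the same time, the ratio reduces to the fraction of examined results that are relevant, which matches the abstract's claim that the metric coincides with precision when no snippets are shown.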
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
He, J., Shu, B., Li, X., Yan, H. (2010). Effective Time Ratio: A Measure for Web Search Engines with Document Snippets. In: Cheng, PJ., Kan, MY., Lam, W., Nakov, P. (eds) Information Retrieval Technology. AIRS 2010. Lecture Notes in Computer Science, vol 6458. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17187-1_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-17186-4
Online ISBN: 978-3-642-17187-1