Abstract
From the user’s point of view, in large environments it can be desirable for Information Retrieval Systems (IRS) to retrieve documents according to their relevance levels. Relevance levels have been studied in several previous Information Retrieval (IR) works, while a few others have tackled the question of IRS effectiveness as collection size grows. These latter works applied standard IR measures to collections of increasing size in order to analyze how IRS effectiveness scales. In this work, we bring these two IR issues (multigraded relevance and scalability) together by designing new metrics that evaluate the ability of an IRS to rank documents according to their relevance levels as collection size increases.
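To make the notion of a multigraded relevance measure concrete, the following is a minimal sketch of one standard graded-relevance measure, discounted cumulated gain (in the spirit of Järvelin and Kekäläinen's work on retrieving highly relevant documents). It is an illustration of the family of measures the abstract refers to, not the metrics proposed in this paper; the relevance grades (0–3) and the logarithm base are assumptions chosen for the example.

```python
import math

def dcg(relevance_grades, log_base=2):
    """Discounted cumulated gain for a ranked list of graded judgments.

    Each entry is the relevance grade of the document at that rank
    (e.g. 0 = not relevant .. 3 = highly relevant). Gains at ranks
    below log_base are not discounted; deeper ranks are divided by
    log_base-logarithm of the rank, so late relevant documents
    contribute less.
    """
    total = 0.0
    for rank, grade in enumerate(relevance_grades, start=1):
        discount = 1.0 if rank < log_base else math.log(rank, log_base)
        total += grade / discount
    return total

# A hypothetical ranked result list with graded judgments:
ranking = [3, 2, 3, 0, 1, 2]
print(round(dcg(ranking), 3))  # → 8.097
```

Comparing such curves across collections of increasing size is, in essence, the kind of analysis the paper extends with its new scalability-oriented metrics.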
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Imafouo, A., Beigbeder, M. (2006). Evaluating Scalability in Information Retrieval with Multigraded Relevance. In: Ng, H.T., Leong, M.-K., Kan, M.-Y., Ji, D. (eds.) Information Retrieval Technology. AIRS 2006. Lecture Notes in Computer Science, vol. 4182. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11880592_44
Print ISBN: 978-3-540-45780-0
Online ISBN: 978-3-540-46237-8
eBook Packages: Computer Science (R0)