Abstract
Multiple methods exist for evaluating search systems, ranging from user-oriented approaches to those focused on system performance. When preparing an evaluation, key questions include: (i) why conduct the evaluation, (ii) what should be evaluated, and (iii) how the evaluation should be conducted. In recent years, increasing attention has been paid to the end users of search systems and to understanding what they regard as ‘success’. In this paper we consider what to evaluate; in particular, which criteria users of search systems consider most important and whether this varies by user characteristics. Drawing on our experience with evaluating an academic library catalogue, we gathered input from end users on the perceived importance of different evaluation criteria prior to conducting an evaluation. We analyse the results to show which criteria users value most, together with the inter-relationships between them. Our results highlight the necessity of conducting multiple forms of evaluation to ensure that search systems are deemed successful by their users.
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Clough, P., Goodale, P. (2013). Selecting Success Criteria: Experiences with an Academic Library Catalogue. In: Forner, P., Müller, H., Paredes, R., Rosso, P., Stein, B. (eds) Information Access Evaluation. Multilinguality, Multimodality, and Visualization. CLEF 2013. Lecture Notes in Computer Science, vol 8138. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40802-1_7
DOI: https://doi.org/10.1007/978-3-642-40802-1_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40801-4
Online ISBN: 978-3-642-40802-1