
A Use Case Framework for Information Access Evaluation

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 8830))

Abstract

Information access is no longer only a question of retrieving topical text documents in a work-task context. Information search has become one of the most common uses of personal computers: a daily task for millions of individual users, motivated by information needs they experience momentarily or continuously. Instead of professionally edited text documents, multilingual and multimedia content from a variety of sources of varying quality needs to be accessed. The scope of research in the field must therefore be broadened to better capture the mechanisms behind systems’ impact, take-up and success in the marketplace. Much work has been carried out in this direction: graded relevance, new evaluation metrics, more varied document collections, and a wider range of search tasks under evaluation. The research in the field is, however, fragmented. Although the need for a common evaluation framework is widely acknowledged, no such framework is yet in place. IR system evaluation results are not regularly validated in interactive IR or field studies, and the infrastructure for generalizing interactive IR results over tasks, users and collections is still missing. This chapter presents a use case-based framework for experimental design in interactive information access. Use cases connect system design and evaluation to interaction and user goals, and help identify test cases for different user groups of a system. We suggest that use cases can also provide a useful link between information access system usage and evaluation mechanisms, and thus bring together research from the related fields. We discuss how use cases can guide the development of rich models of users, domains, environments, and interaction, and make explicit how these models are connected to benchmarking mechanisms. We give examples of the central features of the different models, and illustrate with examples how the framework can be used productively in experimental design and reporting with a minimal threshold for adoption.
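To make the linkage the abstract describes more concrete, the following is a minimal, hypothetical sketch of how a use case could bind the models named above (user, domain, environment, interaction) to benchmarking mechanisms. It is not taken from the chapter; the structure, field names, and example values are assumptions introduced only for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch only: the chapter defines the framework conceptually;
# the class layout and field names below are assumptions made for this example.

@dataclass
class UseCase:
    name: str                          # e.g. "prior-art search by a patent examiner"
    user_model: Dict[str, str]         # e.g. {"expertise": "professional", "goal": "high recall"}
    domain_model: Dict[str, str]       # e.g. {"content": "patents", "languages": "multilingual"}
    environment_model: Dict[str, str]  # e.g. {"device": "desktop", "setting": "workplace"}
    interaction_model: List[str]       # e.g. ["query", "inspect results", "reformulate", "save"]
    metrics: List[Callable[[list], float]]  # benchmarking mechanisms bound to this use case

def evaluate(use_case: UseCase, ranked_results: list) -> Dict[str, float]:
    """Apply each metric attached to the use case to one system run."""
    return {metric.__name__: metric(ranked_results) for metric in use_case.metrics}
```

Binding the evaluation metrics to the use case itself, rather than choosing them per experiment, is one way to make the connection between system usage and evaluation mechanisms explicit, which is the role the abstract assigns to use cases.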

Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Hansen, P., Järvelin, A., Eriksson, G., Karlgren, J. (2014). A Use Case Framework for Information Access Evaluation. In: Paltoglou, G., Loizides, F., Hansen, P. (eds) Professional Search in the Modern World. Lecture Notes in Computer Science, vol 8830. Springer, Cham. https://doi.org/10.1007/978-3-319-12511-4_2

  • DOI: https://doi.org/10.1007/978-3-319-12511-4_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-12510-7

  • Online ISBN: 978-3-319-12511-4

  • eBook Packages: Computer Science, Computer Science (R0)
