Evaluating Real Patent Retrieval Effectiveness

Part of the book series: The Information Retrieval Series (INRE, volume 29)

Abstract

In this chapter we consider the nature of Information Retrieval evaluation for patent searching. We outline the challenges involved in conducting patent searches and the commercial risks they carry. We then highlight the main difficulties in reconciling how retrieval systems are evaluated in the laboratory with the needs of patent searchers, and conclude with suggestions for developing more informative evaluation procedures for patent searching.
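
The laboratory evaluation referred to above is typically batch, Cranfield/TREC-style scoring of a ranked result list against fixed relevance judgements. As a purely illustrative sketch (not taken from the chapter; the document identifiers, judgements, and function names below are hypothetical), the following Python snippet computes precision, recall, and average precision for a single query, the kind of summary figures such evaluations report.

    from typing import List, Set, Tuple


    def precision_recall(ranked_ids: List[str], relevant_ids: Set[str], k: int) -> Tuple[float, float]:
        # Precision and recall of the top-k results against binary relevance judgements.
        top_k = ranked_ids[:k]
        hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
        precision = hits / k if k else 0.0
        recall = hits / len(relevant_ids) if relevant_ids else 0.0
        return precision, recall


    def average_precision(ranked_ids: List[str], relevant_ids: Set[str]) -> float:
        # Average precision: sum of precision values at the rank of each relevant
        # document, divided by the total number of relevant documents (retrieved or not).
        hits, precision_sum = 0, 0.0
        for rank, doc_id in enumerate(ranked_ids, start=1):
            if doc_id in relevant_ids:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant_ids) if relevant_ids else 0.0


    if __name__ == "__main__":
        # Hypothetical ranked output of a system for one query, and the judged relevant set.
        run = ["EP100", "US200", "EP300", "US400", "EP500"]
        qrels = {"US200", "EP500", "US999"}  # US999 is relevant but never retrieved

        p, r = precision_recall(run, qrels, k=5)
        print(f"P@5 = {p:.2f}  R@5 = {r:.2f}")
        print(f"AP  = {average_precision(run, qrels):.2f}")

Summary figures of this kind describe one ranked list for one query; whether they reflect what a patent searcher needs from a system is the question the chapter takes up.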

Notes

  1. http://clef.iei.pi.cnr.it.

  2. http://research.nii.ac.jp/ntcir.

  3. With the possible exception of INEX, which does consider the relative relevance of sub-document units that may have overlapping content.

  4. Exhaustive query assessments also mean that we can assess the quality of the original query itself.

Author information

Correspondence to Anthony Trippe.

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Trippe, A., Ruthven, I. (2011). Evaluating Real Patent Retrieval Effectiveness. In: Lupu, M., Mayer, K., Tait, J., Trippe, A. (eds) Current Challenges in Patent Information Retrieval. The Information Retrieval Series, vol 29. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19231-9_6

  • DOI: https://doi.org/10.1007/978-3-642-19231-9_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-19230-2

  • Online ISBN: 978-3-642-19231-9
