Search Snippet Evaluation at Yandex: Lessons Learned and Future Directions

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 6941)

Abstract

In this paper, we present a Wikipedia-based approach to query expansion for the task of image retrieval that combines salient encyclopaedic concepts with the picturability of words. Our model generates the expanded query terms in a single two-stage process rather than through multiple iterative passes, requires no manual feedback, and is completely unsupervised. Preliminary results from a comparative study on the ImageCLEF 2010 Wikipedia dataset show that the proposed model is effective.
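
As an illustration only, the following minimal Python sketch shows the kind of two-stage expansion pipeline the abstract describes: a first stage that gathers candidate concepts for a query (stubbed here with a small dictionary standing in for Wikipedia-derived concepts) and a second stage that ranks those candidates by a picturability score. The names, data, and scores (CONCEPTS, PICTURABILITY, expand_query) are hypothetical stand-ins, not the authors' actual model.

# Hypothetical sketch of a two-stage query expansion pipeline:
# stage 1 collects candidate concepts for a query, stage 2 ranks them
# by a "picturability" score and keeps the top terms.

from typing import Dict, List

# Stand-in for salient encyclopaedic concepts mined from Wikipedia.
CONCEPTS: Dict[str, List[str]] = {
    "jaguar": ["big cat", "rainforest", "Panthera onca", "automobile"],
    "eiffel tower": ["Paris", "landmark", "iron lattice", "France"],
}

# Stand-in picturability scores (how strongly a term suggests visual content).
PICTURABILITY: Dict[str, float] = {
    "big cat": 0.9, "rainforest": 0.8, "Panthera onca": 0.7, "automobile": 0.6,
    "Paris": 0.8, "landmark": 0.5, "iron lattice": 0.7, "France": 0.4,
}

def expand_query(query: str, top_k: int = 3) -> List[str]:
    """Two-stage expansion: concept lookup, then picturability ranking."""
    # Stage 1: collect candidate expansion terms for the query.
    candidates = CONCEPTS.get(query.lower(), [])
    # Stage 2: rank candidates by picturability and keep the top_k.
    ranked = sorted(candidates, key=lambda t: PICTURABILITY.get(t, 0.0), reverse=True)
    return [query] + ranked[:top_k]

if __name__ == "__main__":
    print(expand_query("jaguar"))  # ['jaguar', 'big cat', 'rainforest', 'Panthera onca']

Because both stages are deterministic lookups and rankings, the expansion needs only one pass over the candidate set, which is the property the abstract contrasts with iterative expansion schemes.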

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Savenkov, D., Braslavski, P., Lebedev, M. (2011). Search Snippet Evaluation at Yandex: Lessons Learned and Future Directions. In: Forner, P., Gonzalo, J., Kekäläinen, J., Lalmas, M., de Rijke, M. (eds) Multilingual and Multimodal Information Access Evaluation. CLEF 2011. Lecture Notes in Computer Science, vol 6941. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23708-9_4

  • DOI: https://doi.org/10.1007/978-3-642-23708-9_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23707-2

  • Online ISBN: 978-3-642-23708-9

  • eBook Packages: Computer Science (R0)
