DOI: 10.1145/2854946.2886106
abstract

System And User Centered Evaluation Approaches in Interactive Information Retrieval (SAUCE 2016)

Published: 13 March 2016

ABSTRACT

The purpose of this half-day workshop is to bring together academic and industry interactive information retrieval (IIR) researchers with an interest in evaluation methodologies. The workshop articulates contemporary challenges in the investigation of IIR and invites user- and system-oriented researchers to work collaboratively to address these challenges by combining user- and system-centered methodologies in meaningful ways. We anticipate that this workshop will initiate productive knowledge exchange and partnerships that can respond to the increasing user, task, system, and contextual complexity of the IIR field.



Published in

CHIIR '16: Proceedings of the 2016 ACM on Conference on Human Information Interaction and Retrieval
March 2016, 400 pages
ISBN: 9781450337519
DOI: 10.1145/2854946

                Copyright © 2016 Owner/Author

                Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

                Publisher

                Association for Computing Machinery

                New York, NY, United States




Acceptance Rates

CHIIR '16 paper acceptance rate: 23 of 58 submissions (40%). Overall acceptance rate: 55 of 163 submissions (34%).
